Jan 05 13:44:35 localhost kernel: Linux version 5.14.0-654.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 19 08:34:59 UTC 2025
Jan 05 13:44:35 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 05 13:44:35 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-654.el9.x86_64 root=UUID=f677d6a5-1bcd-4a82-bb95-263d2adaa51b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 05 13:44:35 localhost kernel: BIOS-provided physical RAM map:
Jan 05 13:44:35 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 05 13:44:35 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 05 13:44:35 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 05 13:44:35 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 05 13:44:35 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 05 13:44:35 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 05 13:44:35 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 05 13:44:35 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 05 13:44:35 localhost kernel: NX (Execute Disable) protection: active
Jan 05 13:44:35 localhost kernel: APIC: Static calls initialized
Jan 05 13:44:35 localhost kernel: SMBIOS 2.8 present.
Jan 05 13:44:35 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 05 13:44:35 localhost kernel: Hypervisor detected: KVM
Jan 05 13:44:35 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 05 13:44:35 localhost kernel: kvm-clock: using sched offset of 3763766302 cycles
Jan 05 13:44:35 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 05 13:44:35 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 05 13:44:35 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 05 13:44:35 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 05 13:44:35 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 05 13:44:35 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 05 13:44:35 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 05 13:44:35 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 05 13:44:35 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 05 13:44:35 localhost kernel: Using GB pages for direct mapping
Jan 05 13:44:35 localhost kernel: RAMDISK: [mem 0x2d462000-0x32a28fff]
Jan 05 13:44:35 localhost kernel: ACPI: Early table checksum verification disabled
Jan 05 13:44:35 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 05 13:44:35 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 05 13:44:35 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 05 13:44:35 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 05 13:44:35 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 05 13:44:35 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 05 13:44:35 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 05 13:44:35 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 05 13:44:35 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 05 13:44:35 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 05 13:44:35 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 05 13:44:35 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 05 13:44:35 localhost kernel: No NUMA configuration found
Jan 05 13:44:35 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 05 13:44:35 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 05 13:44:35 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 05 13:44:35 localhost kernel: Zone ranges:
Jan 05 13:44:35 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 05 13:44:35 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 05 13:44:35 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 05 13:44:35 localhost kernel:   Device   empty
Jan 05 13:44:35 localhost kernel: Movable zone start for each node
Jan 05 13:44:35 localhost kernel: Early memory node ranges
Jan 05 13:44:35 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 05 13:44:35 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 05 13:44:35 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 05 13:44:35 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 05 13:44:35 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 05 13:44:35 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 05 13:44:35 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 05 13:44:35 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 05 13:44:35 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 05 13:44:35 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 05 13:44:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 05 13:44:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 05 13:44:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 05 13:44:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 05 13:44:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 05 13:44:35 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 05 13:44:35 localhost kernel: TSC deadline timer available
Jan 05 13:44:35 localhost kernel: CPU topo: Max. logical packages:   8
Jan 05 13:44:35 localhost kernel: CPU topo: Max. logical dies:       8
Jan 05 13:44:35 localhost kernel: CPU topo: Max. dies per package:   1
Jan 05 13:44:35 localhost kernel: CPU topo: Max. threads per core:   1
Jan 05 13:44:35 localhost kernel: CPU topo: Num. cores per package:     1
Jan 05 13:44:35 localhost kernel: CPU topo: Num. threads per package:   1
Jan 05 13:44:35 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 05 13:44:35 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 05 13:44:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 05 13:44:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 05 13:44:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 05 13:44:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 05 13:44:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 05 13:44:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 05 13:44:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 05 13:44:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 05 13:44:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 05 13:44:35 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 05 13:44:35 localhost kernel: Booting paravirtualized kernel on KVM
Jan 05 13:44:35 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 05 13:44:35 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 05 13:44:35 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 05 13:44:35 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 05 13:44:35 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 05 13:44:35 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 05 13:44:35 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-654.el9.x86_64 root=UUID=f677d6a5-1bcd-4a82-bb95-263d2adaa51b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 05 13:44:35 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-654.el9.x86_64", will be passed to user space.
Jan 05 13:44:35 localhost kernel: random: crng init done
Jan 05 13:44:35 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 05 13:44:35 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 05 13:44:35 localhost kernel: Fallback order for Node 0: 0 
Jan 05 13:44:35 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 05 13:44:35 localhost kernel: Policy zone: Normal
Jan 05 13:44:35 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 05 13:44:35 localhost kernel: software IO TLB: area num 8.
Jan 05 13:44:35 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 05 13:44:35 localhost kernel: ftrace: allocating 49413 entries in 194 pages
Jan 05 13:44:35 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 05 13:44:35 localhost kernel: Dynamic Preempt: voluntary
Jan 05 13:44:35 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 05 13:44:35 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 05 13:44:35 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 05 13:44:35 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 05 13:44:35 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 05 13:44:35 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 05 13:44:35 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 05 13:44:35 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 05 13:44:35 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 05 13:44:35 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 05 13:44:35 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 05 13:44:35 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 05 13:44:35 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 05 13:44:35 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 05 13:44:35 localhost kernel: Console: colour VGA+ 80x25
Jan 05 13:44:35 localhost kernel: printk: console [ttyS0] enabled
Jan 05 13:44:35 localhost kernel: ACPI: Core revision 20230331
Jan 05 13:44:35 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 05 13:44:35 localhost kernel: x2apic enabled
Jan 05 13:44:35 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 05 13:44:35 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 05 13:44:35 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 05 13:44:35 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 05 13:44:35 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 05 13:44:35 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 05 13:44:35 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 05 13:44:35 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 05 13:44:35 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 05 13:44:35 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 05 13:44:35 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 05 13:44:35 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 05 13:44:35 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 05 13:44:35 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 05 13:44:35 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 05 13:44:35 localhost kernel: x86/bugs: return thunk changed
Jan 05 13:44:35 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 05 13:44:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 05 13:44:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 05 13:44:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 05 13:44:35 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 05 13:44:35 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 05 13:44:35 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 05 13:44:35 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 05 13:44:35 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 05 13:44:35 localhost kernel: landlock: Up and running.
Jan 05 13:44:35 localhost kernel: Yama: becoming mindful.
Jan 05 13:44:35 localhost kernel: SELinux:  Initializing.
Jan 05 13:44:35 localhost kernel: LSM support for eBPF active
Jan 05 13:44:35 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 05 13:44:35 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 05 13:44:35 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 05 13:44:35 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 05 13:44:35 localhost kernel: ... version:                0
Jan 05 13:44:35 localhost kernel: ... bit width:              48
Jan 05 13:44:35 localhost kernel: ... generic registers:      6
Jan 05 13:44:35 localhost kernel: ... value mask:             0000ffffffffffff
Jan 05 13:44:35 localhost kernel: ... max period:             00007fffffffffff
Jan 05 13:44:35 localhost kernel: ... fixed-purpose events:   0
Jan 05 13:44:35 localhost kernel: ... event mask:             000000000000003f
Jan 05 13:44:35 localhost kernel: signal: max sigframe size: 1776
Jan 05 13:44:35 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 05 13:44:35 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 05 13:44:35 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 05 13:44:35 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 05 13:44:35 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 05 13:44:35 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 05 13:44:35 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 05 13:44:35 localhost kernel: node 0 deferred pages initialised in 27ms
Jan 05 13:44:35 localhost kernel: Memory: 7763904K/8388068K available (16384K kernel code, 5796K rwdata, 13908K rodata, 4196K init, 7200K bss, 618248K reserved, 0K cma-reserved)
Jan 05 13:44:35 localhost kernel: devtmpfs: initialized
Jan 05 13:44:35 localhost kernel: x86/mm: Memory block size: 128MB
Jan 05 13:44:35 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 05 13:44:35 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 05 13:44:35 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 05 13:44:35 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 05 13:44:35 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 05 13:44:35 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 05 13:44:35 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 05 13:44:35 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 05 13:44:35 localhost kernel: audit: type=2000 audit(1767620673.284:1): state=initialized audit_enabled=0 res=1
Jan 05 13:44:35 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 05 13:44:35 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 05 13:44:35 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 05 13:44:35 localhost kernel: cpuidle: using governor menu
Jan 05 13:44:35 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 05 13:44:35 localhost kernel: PCI: Using configuration type 1 for base access
Jan 05 13:44:35 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 05 13:44:35 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 05 13:44:35 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 05 13:44:35 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 05 13:44:35 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 05 13:44:35 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 05 13:44:35 localhost kernel: Demotion targets for Node 0: null
Jan 05 13:44:35 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 05 13:44:35 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 05 13:44:35 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 05 13:44:35 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 05 13:44:35 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 05 13:44:35 localhost kernel: ACPI: Interpreter enabled
Jan 05 13:44:35 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 05 13:44:35 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 05 13:44:35 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 05 13:44:35 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 05 13:44:35 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 05 13:44:35 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 05 13:44:35 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [3] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [4] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [5] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [6] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [7] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [8] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [9] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [10] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [11] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [12] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [13] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [14] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [15] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [16] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [17] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [18] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [19] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [20] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [21] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [22] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [23] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [24] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [25] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [26] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [27] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [28] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [29] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [30] registered
Jan 05 13:44:35 localhost kernel: acpiphp: Slot [31] registered
Jan 05 13:44:35 localhost kernel: PCI host bridge to bus 0000:00
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 05 13:44:35 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 05 13:44:35 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 05 13:44:35 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 05 13:44:35 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 05 13:44:35 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 05 13:44:35 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 05 13:44:35 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 05 13:44:35 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 05 13:44:35 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 05 13:44:35 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 05 13:44:35 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 05 13:44:35 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 05 13:44:35 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 05 13:44:35 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 05 13:44:35 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 05 13:44:35 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 05 13:44:35 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 05 13:44:35 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 05 13:44:35 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 05 13:44:35 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 05 13:44:35 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 05 13:44:35 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 05 13:44:35 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 05 13:44:35 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 05 13:44:35 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 05 13:44:35 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 05 13:44:35 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 05 13:44:35 localhost kernel: iommu: Default domain type: Translated
Jan 05 13:44:35 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 05 13:44:35 localhost kernel: SCSI subsystem initialized
Jan 05 13:44:35 localhost kernel: ACPI: bus type USB registered
Jan 05 13:44:35 localhost kernel: usbcore: registered new interface driver usbfs
Jan 05 13:44:35 localhost kernel: usbcore: registered new interface driver hub
Jan 05 13:44:35 localhost kernel: usbcore: registered new device driver usb
Jan 05 13:44:35 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 05 13:44:35 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 05 13:44:35 localhost kernel: PTP clock support registered
Jan 05 13:44:35 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 05 13:44:35 localhost kernel: NetLabel: Initializing
Jan 05 13:44:35 localhost kernel: NetLabel:  domain hash size = 128
Jan 05 13:44:35 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 05 13:44:35 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 05 13:44:35 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 05 13:44:35 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 05 13:44:35 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 05 13:44:35 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 05 13:44:35 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 05 13:44:35 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 05 13:44:35 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 05 13:44:35 localhost kernel: vgaarb: loaded
Jan 05 13:44:35 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 05 13:44:35 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 05 13:44:35 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 05 13:44:35 localhost kernel: pnp: PnP ACPI init
Jan 05 13:44:35 localhost kernel: pnp 00:03: [dma 2]
Jan 05 13:44:35 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 05 13:44:35 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 05 13:44:35 localhost kernel: NET: Registered PF_INET protocol family
Jan 05 13:44:35 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 05 13:44:35 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 05 13:44:35 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 05 13:44:35 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 05 13:44:35 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 05 13:44:35 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 05 13:44:35 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 05 13:44:35 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 05 13:44:35 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 05 13:44:35 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 05 13:44:35 localhost kernel: NET: Registered PF_XDP protocol family
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 05 13:44:35 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 05 13:44:35 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 05 13:44:35 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 05 13:44:35 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 71175 usecs
Jan 05 13:44:35 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 05 13:44:35 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 05 13:44:35 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 05 13:44:35 localhost kernel: ACPI: bus type thunderbolt registered
Jan 05 13:44:35 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 05 13:44:35 localhost kernel: Initialise system trusted keyrings
Jan 05 13:44:35 localhost kernel: Key type blacklist registered
Jan 05 13:44:35 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 05 13:44:35 localhost kernel: zbud: loaded
Jan 05 13:44:35 localhost kernel: integrity: Platform Keyring initialized
Jan 05 13:44:35 localhost kernel: integrity: Machine keyring initialized
Jan 05 13:44:35 localhost kernel: Freeing initrd memory: 87836K
Jan 05 13:44:35 localhost kernel: NET: Registered PF_ALG protocol family
Jan 05 13:44:35 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 05 13:44:35 localhost kernel: Key type asymmetric registered
Jan 05 13:44:35 localhost kernel: Asymmetric key parser 'x509' registered
Jan 05 13:44:35 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 05 13:44:35 localhost kernel: io scheduler mq-deadline registered
Jan 05 13:44:35 localhost kernel: io scheduler kyber registered
Jan 05 13:44:35 localhost kernel: io scheduler bfq registered
Jan 05 13:44:35 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 05 13:44:35 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 05 13:44:35 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 05 13:44:35 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 05 13:44:35 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 05 13:44:35 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 05 13:44:35 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 05 13:44:35 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 05 13:44:35 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 05 13:44:35 localhost kernel: Non-volatile memory driver v1.3
Jan 05 13:44:35 localhost kernel: rdac: device handler registered
Jan 05 13:44:35 localhost kernel: hp_sw: device handler registered
Jan 05 13:44:35 localhost kernel: emc: device handler registered
Jan 05 13:44:35 localhost kernel: alua: device handler registered
Jan 05 13:44:35 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 05 13:44:35 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 05 13:44:35 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 05 13:44:35 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 05 13:44:35 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 05 13:44:35 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 05 13:44:35 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 05 13:44:35 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-654.el9.x86_64 uhci_hcd
Jan 05 13:44:35 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 05 13:44:35 localhost kernel: hub 1-0:1.0: USB hub found
Jan 05 13:44:35 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 05 13:44:35 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 05 13:44:35 localhost kernel: usbserial: USB Serial support registered for generic
Jan 05 13:44:35 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 05 13:44:35 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 05 13:44:35 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 05 13:44:35 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 05 13:44:35 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 05 13:44:35 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 05 13:44:35 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 05 13:44:35 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-05T13:44:34 UTC (1767620674)
Jan 05 13:44:35 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 05 13:44:35 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 05 13:44:35 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 05 13:44:35 localhost kernel: usbcore: registered new interface driver usbhid
Jan 05 13:44:35 localhost kernel: usbhid: USB HID core driver
Jan 05 13:44:35 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 05 13:44:35 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 05 13:44:35 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 05 13:44:35 localhost kernel: Initializing XFRM netlink socket
Jan 05 13:44:35 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 05 13:44:35 localhost kernel: Segment Routing with IPv6
Jan 05 13:44:35 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 05 13:44:35 localhost kernel: mpls_gso: MPLS GSO support
Jan 05 13:44:35 localhost kernel: IPI shorthand broadcast: enabled
Jan 05 13:44:35 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 05 13:44:35 localhost kernel: AES CTR mode by8 optimization enabled
Jan 05 13:44:35 localhost kernel: sched_clock: Marking stable (1552016183, 148497973)->(1779684725, -79170569)
Jan 05 13:44:35 localhost kernel: registered taskstats version 1
Jan 05 13:44:35 localhost kernel: Loading compiled-in X.509 certificates
Jan 05 13:44:35 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 1033950e50bfbfa81c0905119b09a8a13ebc27cf'
Jan 05 13:44:35 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 05 13:44:35 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 05 13:44:35 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 05 13:44:35 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 05 13:44:35 localhost kernel: Demotion targets for Node 0: null
Jan 05 13:44:35 localhost kernel: page_owner is disabled
Jan 05 13:44:35 localhost kernel: Key type .fscrypt registered
Jan 05 13:44:35 localhost kernel: Key type fscrypt-provisioning registered
Jan 05 13:44:35 localhost kernel: Key type big_key registered
Jan 05 13:44:35 localhost kernel: Key type encrypted registered
Jan 05 13:44:35 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 05 13:44:35 localhost kernel: Loading compiled-in module X.509 certificates
Jan 05 13:44:35 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 1033950e50bfbfa81c0905119b09a8a13ebc27cf'
Jan 05 13:44:35 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 05 13:44:35 localhost kernel: ima: No architecture policies found
Jan 05 13:44:35 localhost kernel: evm: Initialising EVM extended attributes:
Jan 05 13:44:35 localhost kernel: evm: security.selinux
Jan 05 13:44:35 localhost kernel: evm: security.SMACK64 (disabled)
Jan 05 13:44:35 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 05 13:44:35 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 05 13:44:35 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 05 13:44:35 localhost kernel: evm: security.apparmor (disabled)
Jan 05 13:44:35 localhost kernel: evm: security.ima
Jan 05 13:44:35 localhost kernel: evm: security.capability
Jan 05 13:44:35 localhost kernel: evm: HMAC attrs: 0x1
Jan 05 13:44:35 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 05 13:44:35 localhost kernel: Running certificate verification RSA selftest
Jan 05 13:44:35 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 05 13:44:35 localhost kernel: Running certificate verification ECDSA selftest
Jan 05 13:44:35 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 05 13:44:35 localhost kernel: clk: Disabling unused clocks
Jan 05 13:44:35 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 05 13:44:35 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 05 13:44:35 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 05 13:44:35 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Jan 05 13:44:35 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 05 13:44:35 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 05 13:44:35 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 05 13:44:35 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 05 13:44:35 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 05 13:44:35 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 05 13:44:35 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 05 13:44:35 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 05 13:44:35 localhost kernel: Run /init as init process
Jan 05 13:44:35 localhost kernel:   with arguments:
Jan 05 13:44:35 localhost kernel:     /init
Jan 05 13:44:35 localhost kernel:   with environment:
Jan 05 13:44:35 localhost kernel:     HOME=/
Jan 05 13:44:35 localhost kernel:     TERM=linux
Jan 05 13:44:35 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-654.el9.x86_64
Jan 05 13:44:35 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 05 13:44:35 localhost systemd[1]: Detected virtualization kvm.
Jan 05 13:44:35 localhost systemd[1]: Detected architecture x86-64.
Jan 05 13:44:35 localhost systemd[1]: Running in initrd.
Jan 05 13:44:35 localhost systemd[1]: No hostname configured, using default hostname.
Jan 05 13:44:35 localhost systemd[1]: Hostname set to <localhost>.
Jan 05 13:44:35 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 05 13:44:35 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 05 13:44:35 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 05 13:44:35 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 05 13:44:35 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 05 13:44:35 localhost systemd[1]: Reached target Local File Systems.
Jan 05 13:44:35 localhost systemd[1]: Reached target Path Units.
Jan 05 13:44:35 localhost systemd[1]: Reached target Slice Units.
Jan 05 13:44:35 localhost systemd[1]: Reached target Swaps.
Jan 05 13:44:35 localhost systemd[1]: Reached target Timer Units.
Jan 05 13:44:35 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 05 13:44:35 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 05 13:44:35 localhost systemd[1]: Listening on Journal Socket.
Jan 05 13:44:35 localhost systemd[1]: Listening on udev Control Socket.
Jan 05 13:44:35 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 05 13:44:35 localhost systemd[1]: Reached target Socket Units.
Jan 05 13:44:35 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 05 13:44:35 localhost systemd[1]: Starting Journal Service...
Jan 05 13:44:35 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 05 13:44:35 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 05 13:44:35 localhost systemd[1]: Starting Create System Users...
Jan 05 13:44:35 localhost systemd[1]: Starting Setup Virtual Console...
Jan 05 13:44:35 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 05 13:44:35 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 05 13:44:35 localhost systemd[1]: Finished Create System Users.
Jan 05 13:44:35 localhost systemd-journald[304]: Journal started
Jan 05 13:44:35 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/21aea88de46b43caa8527ac5c1bf4054) is 8.0M, max 153.6M, 145.6M free.
Jan 05 13:44:35 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Jan 05 13:44:35 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Jan 05 13:44:35 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 05 13:44:35 localhost systemd[1]: Started Journal Service.
Jan 05 13:44:35 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 05 13:44:35 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 05 13:44:35 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 05 13:44:35 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 05 13:44:35 localhost systemd[1]: Finished Setup Virtual Console.
Jan 05 13:44:35 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 05 13:44:35 localhost systemd[1]: Starting dracut cmdline hook...
Jan 05 13:44:35 localhost dracut-cmdline[322]: dracut-9 dracut-057-102.git20250818.el9
Jan 05 13:44:35 localhost dracut-cmdline[322]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-654.el9.x86_64 root=UUID=f677d6a5-1bcd-4a82-bb95-263d2adaa51b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 05 13:44:35 localhost systemd[1]: Finished dracut cmdline hook.
Jan 05 13:44:35 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 05 13:44:35 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 05 13:44:35 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 05 13:44:35 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 05 13:44:35 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 05 13:44:35 localhost kernel: RPC: Registered udp transport module.
Jan 05 13:44:35 localhost kernel: RPC: Registered tcp transport module.
Jan 05 13:44:35 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 05 13:44:35 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 05 13:44:35 localhost rpc.statd[440]: Version 2.5.4 starting
Jan 05 13:44:35 localhost rpc.statd[440]: Initializing NSM state
Jan 05 13:44:35 localhost rpc.idmapd[445]: Setting log level to 0
Jan 05 13:44:35 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 05 13:44:35 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 05 13:44:36 localhost systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Jan 05 13:44:36 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 05 13:44:36 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 05 13:44:36 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 05 13:44:36 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 05 13:44:36 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 05 13:44:36 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 05 13:44:36 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 05 13:44:36 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 05 13:44:36 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 05 13:44:36 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 05 13:44:36 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 05 13:44:36 localhost systemd[1]: Reached target Network.
Jan 05 13:44:36 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 05 13:44:36 localhost systemd[1]: Starting dracut initqueue hook...
Jan 05 13:44:36 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 05 13:44:36 localhost systemd[1]: Reached target System Initialization.
Jan 05 13:44:36 localhost systemd[1]: Reached target Basic System.
Jan 05 13:44:36 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 05 13:44:36 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 05 13:44:36 localhost kernel:  vda: vda1
Jan 05 13:44:36 localhost kernel: libata version 3.00 loaded.
Jan 05 13:44:36 localhost systemd[1]: Found device /dev/disk/by-uuid/f677d6a5-1bcd-4a82-bb95-263d2adaa51b.
Jan 05 13:44:36 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 05 13:44:36 localhost kernel: scsi host0: ata_piix
Jan 05 13:44:36 localhost kernel: scsi host1: ata_piix
Jan 05 13:44:36 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 05 13:44:36 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 05 13:44:36 localhost systemd-udevd[497]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 13:44:36 localhost systemd[1]: Reached target Initrd Root Device.
Jan 05 13:44:36 localhost kernel: ata1: found unknown device (class 0)
Jan 05 13:44:36 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 05 13:44:36 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 05 13:44:36 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 05 13:44:36 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 05 13:44:36 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 05 13:44:36 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 05 13:44:36 localhost systemd[1]: Finished dracut initqueue hook.
Jan 05 13:44:36 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 05 13:44:36 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 05 13:44:36 localhost systemd[1]: Reached target Remote File Systems.
Jan 05 13:44:36 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 05 13:44:36 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 05 13:44:36 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/f677d6a5-1bcd-4a82-bb95-263d2adaa51b...
Jan 05 13:44:36 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Jan 05 13:44:36 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/f677d6a5-1bcd-4a82-bb95-263d2adaa51b.
Jan 05 13:44:36 localhost systemd[1]: Mounting /sysroot...
Jan 05 13:44:37 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 05 13:44:37 localhost kernel: XFS (vda1): Mounting V5 Filesystem f677d6a5-1bcd-4a82-bb95-263d2adaa51b
Jan 05 13:44:37 localhost kernel: XFS (vda1): Ending clean mount
Jan 05 13:44:37 localhost systemd[1]: Mounted /sysroot.
Jan 05 13:44:37 localhost systemd[1]: Reached target Initrd Root File System.
Jan 05 13:44:37 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 05 13:44:37 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 05 13:44:37 localhost systemd[1]: Reached target Initrd File Systems.
Jan 05 13:44:37 localhost systemd[1]: Reached target Initrd Default Target.
Jan 05 13:44:37 localhost systemd[1]: Starting dracut mount hook...
Jan 05 13:44:37 localhost systemd[1]: Finished dracut mount hook.
Jan 05 13:44:37 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 05 13:44:37 localhost rpc.idmapd[445]: exiting on signal 15
Jan 05 13:44:37 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 05 13:44:37 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 05 13:44:37 localhost systemd[1]: Stopped target Network.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Timer Units.
Jan 05 13:44:37 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 05 13:44:37 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Basic System.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Path Units.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Remote File Systems.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Slice Units.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Socket Units.
Jan 05 13:44:37 localhost systemd[1]: Stopped target System Initialization.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Local File Systems.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Swaps.
Jan 05 13:44:37 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped dracut mount hook.
Jan 05 13:44:37 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 05 13:44:37 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 05 13:44:37 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 05 13:44:37 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 05 13:44:37 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 05 13:44:37 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 05 13:44:37 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 05 13:44:37 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 05 13:44:37 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 05 13:44:37 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 05 13:44:37 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 05 13:44:37 localhost systemd[1]: systemd-udevd.service: Consumed 1.016s CPU time.
Jan 05 13:44:37 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 05 13:44:37 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Closed udev Control Socket.
Jan 05 13:44:37 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Closed udev Kernel Socket.
Jan 05 13:44:37 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 05 13:44:37 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 05 13:44:37 localhost systemd[1]: Starting Cleanup udev Database...
Jan 05 13:44:37 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 05 13:44:37 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 05 13:44:37 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Stopped Create System Users.
Jan 05 13:44:37 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 05 13:44:37 localhost systemd[1]: Finished Cleanup udev Database.
Jan 05 13:44:37 localhost systemd[1]: Reached target Switch Root.
Jan 05 13:44:37 localhost systemd[1]: Starting Switch Root...
Jan 05 13:44:37 localhost systemd[1]: Switching root.
Jan 05 13:44:37 localhost systemd-journald[304]: Journal stopped
Jan 05 13:44:38 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Jan 05 13:44:38 localhost kernel: audit: type=1404 audit(1767620677.906:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 05 13:44:38 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 13:44:38 localhost kernel: SELinux:  policy capability open_perms=1
Jan 05 13:44:38 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 13:44:38 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 05 13:44:38 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 13:44:38 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 13:44:38 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 13:44:38 localhost kernel: audit: type=1403 audit(1767620678.063:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 05 13:44:38 localhost systemd[1]: Successfully loaded SELinux policy in 161.484ms.
Jan 05 13:44:38 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.104ms.
Jan 05 13:44:38 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 05 13:44:38 localhost systemd[1]: Detected virtualization kvm.
Jan 05 13:44:38 localhost systemd[1]: Detected architecture x86-64.
Jan 05 13:44:38 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 13:44:38 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 05 13:44:38 localhost systemd[1]: Stopped Switch Root.
Jan 05 13:44:38 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 05 13:44:38 localhost systemd[1]: Created slice Slice /system/getty.
Jan 05 13:44:38 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 05 13:44:38 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 05 13:44:38 localhost systemd[1]: Created slice User and Session Slice.
Jan 05 13:44:38 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 05 13:44:38 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 05 13:44:38 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 05 13:44:38 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 05 13:44:38 localhost systemd[1]: Stopped target Switch Root.
Jan 05 13:44:38 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 05 13:44:38 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 05 13:44:38 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 05 13:44:38 localhost systemd[1]: Reached target Path Units.
Jan 05 13:44:38 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 05 13:44:38 localhost systemd[1]: Reached target Slice Units.
Jan 05 13:44:38 localhost systemd[1]: Reached target Swaps.
Jan 05 13:44:38 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 05 13:44:38 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 05 13:44:38 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 05 13:44:38 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 05 13:44:38 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 05 13:44:38 localhost systemd[1]: Listening on udev Control Socket.
Jan 05 13:44:38 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 05 13:44:38 localhost systemd[1]: Mounting Huge Pages File System...
Jan 05 13:44:38 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 05 13:44:38 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 05 13:44:38 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 05 13:44:38 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 05 13:44:38 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 05 13:44:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 05 13:44:38 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 05 13:44:38 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 05 13:44:38 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 05 13:44:38 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 05 13:44:38 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 05 13:44:38 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 05 13:44:38 localhost systemd[1]: Stopped Journal Service.
Jan 05 13:44:38 localhost kernel: fuse: init (API version 7.37)
Jan 05 13:44:38 localhost systemd[1]: Starting Journal Service...
Jan 05 13:44:38 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 05 13:44:38 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 05 13:44:38 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 05 13:44:38 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 05 13:44:38 localhost systemd-journald[676]: Journal started
Jan 05 13:44:38 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/f46796bb2b37cbb1d783b32fbf8770cb) is 8.0M, max 153.6M, 145.6M free.
Jan 05 13:44:38 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 05 13:44:38 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 05 13:44:38 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 05 13:44:38 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 05 13:44:38 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 05 13:44:38 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 05 13:44:38 localhost systemd[1]: Started Journal Service.
Jan 05 13:44:38 localhost systemd[1]: Mounted Huge Pages File System.
Jan 05 13:44:38 localhost kernel: ACPI: bus type drm_connector registered
Jan 05 13:44:38 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 05 13:44:38 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 05 13:44:38 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 05 13:44:38 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 05 13:44:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 05 13:44:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 05 13:44:38 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 05 13:44:38 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 05 13:44:38 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 05 13:44:38 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 05 13:44:38 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 05 13:44:38 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 05 13:44:38 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 05 13:44:38 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 05 13:44:38 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 05 13:44:38 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 05 13:44:38 localhost systemd[1]: Mounting FUSE Control File System...
Jan 05 13:44:38 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 05 13:44:38 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 05 13:44:38 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 05 13:44:38 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 05 13:44:38 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 05 13:44:38 localhost systemd[1]: Starting Create System Users...
Jan 05 13:44:38 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/f46796bb2b37cbb1d783b32fbf8770cb) is 8.0M, max 153.6M, 145.6M free.
Jan 05 13:44:38 localhost systemd-journald[676]: Received client request to flush runtime journal.
Jan 05 13:44:38 localhost systemd[1]: Mounted FUSE Control File System.
Jan 05 13:44:38 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 05 13:44:38 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 05 13:44:38 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 05 13:44:38 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 05 13:44:38 localhost systemd[1]: Finished Create System Users.
Jan 05 13:44:38 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 05 13:44:38 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 05 13:44:38 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 05 13:44:38 localhost systemd[1]: Reached target Local File Systems.
Jan 05 13:44:38 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 05 13:44:38 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 05 13:44:38 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 05 13:44:38 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 05 13:44:38 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 05 13:44:38 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 05 13:44:38 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 05 13:44:38 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Jan 05 13:44:38 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 05 13:44:38 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 05 13:44:38 localhost systemd[1]: Starting Security Auditing Service...
Jan 05 13:44:38 localhost systemd[1]: Starting RPC Bind...
Jan 05 13:44:38 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 05 13:44:38 localhost auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 05 13:44:38 localhost auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 05 13:44:39 localhost systemd[1]: Started RPC Bind.
Jan 05 13:44:39 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 05 13:44:39 localhost augenrules[706]: /sbin/augenrules: No change
Jan 05 13:44:39 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 05 13:44:39 localhost augenrules[721]: No rules
Jan 05 13:44:39 localhost augenrules[721]: enabled 1
Jan 05 13:44:39 localhost augenrules[721]: failure 1
Jan 05 13:44:39 localhost augenrules[721]: pid 701
Jan 05 13:44:39 localhost augenrules[721]: rate_limit 0
Jan 05 13:44:39 localhost augenrules[721]: backlog_limit 8192
Jan 05 13:44:39 localhost augenrules[721]: lost 0
Jan 05 13:44:39 localhost augenrules[721]: backlog 0
Jan 05 13:44:39 localhost augenrules[721]: backlog_wait_time 60000
Jan 05 13:44:39 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 05 13:44:39 localhost augenrules[721]: enabled 1
Jan 05 13:44:39 localhost augenrules[721]: failure 1
Jan 05 13:44:39 localhost augenrules[721]: pid 701
Jan 05 13:44:39 localhost augenrules[721]: rate_limit 0
Jan 05 13:44:39 localhost augenrules[721]: backlog_limit 8192
Jan 05 13:44:39 localhost augenrules[721]: lost 0
Jan 05 13:44:39 localhost augenrules[721]: backlog 0
Jan 05 13:44:39 localhost augenrules[721]: backlog_wait_time 60000
Jan 05 13:44:39 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 05 13:44:39 localhost augenrules[721]: enabled 1
Jan 05 13:44:39 localhost augenrules[721]: failure 1
Jan 05 13:44:39 localhost augenrules[721]: pid 701
Jan 05 13:44:39 localhost augenrules[721]: rate_limit 0
Jan 05 13:44:39 localhost augenrules[721]: backlog_limit 8192
Jan 05 13:44:39 localhost augenrules[721]: lost 0
Jan 05 13:44:39 localhost augenrules[721]: backlog 0
Jan 05 13:44:39 localhost augenrules[721]: backlog_wait_time 60000
Jan 05 13:44:39 localhost augenrules[721]: backlog_wait_time_actual 0
Jan 05 13:44:39 localhost systemd[1]: Started Security Auditing Service.
Jan 05 13:44:39 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 05 13:44:39 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 05 13:44:39 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 05 13:44:39 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 05 13:44:39 localhost systemd[1]: Starting Update is Completed...
Jan 05 13:44:39 localhost systemd[1]: Finished Update is Completed.
Jan 05 13:44:39 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Jan 05 13:44:39 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 05 13:44:39 localhost systemd[1]: Reached target System Initialization.
Jan 05 13:44:39 localhost systemd[1]: Started dnf makecache --timer.
Jan 05 13:44:39 localhost systemd[1]: Started Daily rotation of log files.
Jan 05 13:44:39 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 05 13:44:39 localhost systemd[1]: Reached target Timer Units.
Jan 05 13:44:39 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 05 13:44:39 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 05 13:44:39 localhost systemd[1]: Reached target Socket Units.
Jan 05 13:44:39 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 05 13:44:39 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 05 13:44:39 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 05 13:44:39 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 05 13:44:39 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 05 13:44:39 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 05 13:44:39 localhost systemd-udevd[735]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 13:44:39 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 05 13:44:39 localhost systemd[1]: Reached target Basic System.
Jan 05 13:44:39 localhost dbus-broker-lau[738]: Ready
Jan 05 13:44:39 localhost systemd[1]: Starting NTP client/server...
Jan 05 13:44:39 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 05 13:44:39 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 05 13:44:39 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 05 13:44:39 localhost systemd[1]: Started irqbalance daemon.
Jan 05 13:44:39 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 05 13:44:39 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 13:44:39 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 13:44:39 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 13:44:39 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 05 13:44:39 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 05 13:44:39 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 05 13:44:39 localhost chronyd[785]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 05 13:44:39 localhost chronyd[785]: Loaded 0 symmetric keys
Jan 05 13:44:39 localhost chronyd[785]: Using right/UTC timezone to obtain leap second data
Jan 05 13:44:39 localhost chronyd[785]: Loaded seccomp filter (level 2)
Jan 05 13:44:39 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 05 13:44:39 localhost systemd[1]: Starting User Login Management...
Jan 05 13:44:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 05 13:44:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 05 13:44:39 localhost systemd[1]: Started NTP client/server.
Jan 05 13:44:39 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 05 13:44:39 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 05 13:44:39 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 05 13:44:39 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 05 13:44:39 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 05 13:44:39 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 05 13:44:39 localhost kernel: Console: switching to colour dummy device 80x25
Jan 05 13:44:39 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 05 13:44:39 localhost kernel: [drm] features: -context_init
Jan 05 13:44:39 localhost kernel: [drm] number of scanouts: 1
Jan 05 13:44:39 localhost kernel: [drm] number of cap sets: 0
Jan 05 13:44:39 localhost systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 05 13:44:39 localhost systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 05 13:44:39 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 05 13:44:39 localhost systemd-logind[795]: New seat seat0.
Jan 05 13:44:39 localhost systemd[1]: Started User Login Management.
Jan 05 13:44:39 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 05 13:44:39 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 05 13:44:39 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 05 13:44:39 localhost kernel: kvm_amd: TSC scaling supported
Jan 05 13:44:39 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 05 13:44:39 localhost kernel: kvm_amd: Nested Paging enabled
Jan 05 13:44:39 localhost kernel: kvm_amd: LBR virtualization supported
Jan 05 13:44:39 localhost iptables.init[778]: iptables: Applying firewall rules: [  OK  ]
Jan 05 13:44:39 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 05 13:44:40 localhost cloud-init[838]: Cloud-init v. 24.4-8.el9 running 'init-local' at Mon, 05 Jan 2026 13:44:39 +0000. Up 6.98 seconds.
Jan 05 13:44:40 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 05 13:44:40 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 05 13:44:40 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpr2zrbusk.mount: Deactivated successfully.
Jan 05 13:44:40 localhost systemd[1]: Starting Hostname Service...
Jan 05 13:44:40 localhost systemd[1]: Started Hostname Service.
Jan 05 13:44:40 np0005574500.novalocal systemd-hostnamed[852]: Hostname set to <np0005574500.novalocal> (static)
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Reached target Preparation for Network.
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Starting Network Manager...
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5461] NetworkManager (version 1.54.2-1.el9) is starting... (boot:4a842e6d-ff22-4aef-a67c-1e6f6b9a395f)
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5466] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5529] manager[0x555f05f18000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5567] hostname: hostname: using hostnamed
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5568] hostname: static hostname changed from (none) to "np0005574500.novalocal"
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5571] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5733] manager[0x555f05f18000]: rfkill: Wi-Fi hardware radio set enabled
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5734] manager[0x555f05f18000]: rfkill: WWAN hardware radio set enabled
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5780] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5780] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5781] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5781] manager: Networking is enabled by state file
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5784] settings: Loaded settings plugin: keyfile (internal)
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5794] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5820] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5831] dhcp: init: Using DHCP client 'internal'
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5834] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5848] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5855] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5863] device (lo): Activation: starting connection 'lo' (3df85b44-84fa-4707-aff2-a3490d11ca8e)
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5874] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5878] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5924] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5928] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5930] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5932] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5935] device (eth0): carrier: link connected
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5939] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5945] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5953] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5957] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5958] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Started Network Manager.
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5963] manager: NetworkManager state is now CONNECTING
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5964] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5971] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.5974] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Reached target Network.
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.6224] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.6231] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 05 13:44:40 np0005574500.novalocal NetworkManager[857]: <info>  [1767620680.6240] device (lo): Activation: successful, device activated.
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Reached target NFS client services.
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: Reached target Remote File Systems.
Jan 05 13:44:40 np0005574500.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 05 13:44:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620682.2030] dhcp4 (eth0): state changed new lease, address=38.102.83.115
Jan 05 13:44:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620682.2044] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 05 13:44:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620682.2075] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 13:44:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620682.2107] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 13:44:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620682.2110] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 13:44:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620682.2118] manager: NetworkManager state is now CONNECTED_SITE
Jan 05 13:44:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620682.2123] device (eth0): Activation: successful, device activated.
Jan 05 13:44:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620682.2134] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 05 13:44:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620682.2139] manager: startup complete
Jan 05 13:44:42 np0005574500.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 05 13:44:42 np0005574500.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Mon, 05 Jan 2026 13:44:42 +0000. Up 9.49 seconds.
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.115         | 255.255.255.0 | global | fa:16:3e:8c:90:5f |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe8c:905f/64 |       .       |  link  | fa:16:3e:8c:90:5f |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Jan 05 13:44:42 np0005574500.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 05 13:44:43 np0005574500.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Jan 05 13:44:43 np0005574500.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 05 13:44:43 np0005574500.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Jan 05 13:44:43 np0005574500.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Jan 05 13:44:43 np0005574500.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Jan 05 13:44:43 np0005574500.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: Generating public/private rsa key pair.
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: The key fingerprint is:
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: SHA256:Ibv0uzNtpLfxfxf0akamGBOBX59yWhXIWyyDa/7Z4aw root@np0005574500.novalocal
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: The key's randomart image is:
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: +---[RSA 3072]----+
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |         .  o o..|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |        . ...= o.|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |      . .. o..=o |
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |       o .oo..=. |
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |      o S o. =. .|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |     . o  +.. oo.|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |      . .+.+.+=.o|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |        +.=o.o++o|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |        o*...Eo..|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: The key fingerprint is:
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: SHA256:zDE+fh+FO0KSJuRuF6XFfXQ+71PjSHBWrLmEc9jg8V4 root@np0005574500.novalocal
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: The key's randomart image is:
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: +---[ECDSA 256]---+
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |              o..|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |         . + ..+ |
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |      . o =.Oo+..|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |     o + B =+X Eo|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |      o S . *.+.o|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |     . + =  .=o.o|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |      o o o +. o.|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |     . . . o o  .|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |            .    |
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: The key fingerprint is:
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: SHA256:vj4n2KdlB11uU9guWVrg/vVt+qSggAW3ZKpFNZ8u7HM root@np0005574500.novalocal
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: The key's randomart image is:
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: +--[ED25519 256]--+
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |        o     .  |
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |       . o . . + |
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |      o + o   + =|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |     . B o . + B |
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |      o S o . O o|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |     o = . . . ++|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |    . .o= E o   *|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |      . +Boo . = |
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: |       .+*.   o..|
Jan 05 13:44:43 np0005574500.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Reached target Network is Online.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Starting System Logging Service...
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 05 13:44:44 np0005574500.novalocal sm-notify[1004]: Version 2.5.4 starting
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Starting Permit User Sessions...
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Finished Permit User Sessions.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Started Command Scheduler.
Jan 05 13:44:44 np0005574500.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Jan 05 13:44:44 np0005574500.novalocal sshd[1006]: Server listening on :: port 22.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Started Getty on tty1.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 05 13:44:44 np0005574500.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Jan 05 13:44:44 np0005574500.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Reached target Login Prompts.
Jan 05 13:44:44 np0005574500.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 46% if used.)
Jan 05 13:44:44 np0005574500.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 05 13:44:44 np0005574500.novalocal rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Jan 05 13:44:44 np0005574500.novalocal rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Started System Logging Service.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Reached target Multi-User System.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 05 13:44:44 np0005574500.novalocal rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 13:44:44 np0005574500.novalocal kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Jan 05 13:44:44 np0005574500.novalocal kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-654.el9.x86_64kdump.img
Jan 05 13:44:44 np0005574500.novalocal cloud-init[1100]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Mon, 05 Jan 2026 13:44:44 +0000. Up 11.33 seconds.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 05 13:44:44 np0005574500.novalocal cloud-init[1249]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Mon, 05 Jan 2026 13:44:44 +0000. Up 11.70 seconds.
Jan 05 13:44:44 np0005574500.novalocal cloud-init[1268]: #############################################################
Jan 05 13:44:44 np0005574500.novalocal cloud-init[1271]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 05 13:44:44 np0005574500.novalocal dracut[1270]: dracut-057-102.git20250818.el9
Jan 05 13:44:44 np0005574500.novalocal cloud-init[1275]: 256 SHA256:zDE+fh+FO0KSJuRuF6XFfXQ+71PjSHBWrLmEc9jg8V4 root@np0005574500.novalocal (ECDSA)
Jan 05 13:44:44 np0005574500.novalocal cloud-init[1282]: 256 SHA256:vj4n2KdlB11uU9guWVrg/vVt+qSggAW3ZKpFNZ8u7HM root@np0005574500.novalocal (ED25519)
Jan 05 13:44:44 np0005574500.novalocal cloud-init[1292]: 3072 SHA256:Ibv0uzNtpLfxfxf0akamGBOBX59yWhXIWyyDa/7Z4aw root@np0005574500.novalocal (RSA)
Jan 05 13:44:44 np0005574500.novalocal cloud-init[1293]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 05 13:44:44 np0005574500.novalocal cloud-init[1294]: #############################################################
Jan 05 13:44:44 np0005574500.novalocal cloud-init[1249]: Cloud-init v. 24.4-8.el9 finished at Mon, 05 Jan 2026 13:44:44 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.90 seconds
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 05 13:44:44 np0005574500.novalocal systemd[1]: Reached target Cloud-init target.
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/f677d6a5-1bcd-4a82-bb95-263d2adaa51b /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-654.el9.x86_64kdump.img 5.14.0-654.el9.x86_64
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 05 13:44:45 np0005574500.novalocal dracut[1274]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: memstrack is not available
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 05 13:44:46 np0005574500.novalocal sshd-session[1772]: Connection reset by 38.102.83.114 port 53478 [preauth]
Jan 05 13:44:46 np0005574500.novalocal sshd-session[1785]: Unable to negotiate with 38.102.83.114 port 53488: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 05 13:44:46 np0005574500.novalocal sshd-session[1805]: Unable to negotiate with 38.102.83.114 port 53492: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 05 13:44:46 np0005574500.novalocal sshd-session[1813]: Unable to negotiate with 38.102.83.114 port 53496: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 05 13:44:46 np0005574500.novalocal sshd-session[1822]: Connection reset by 38.102.83.114 port 53510 [preauth]
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: memstrack is not available
Jan 05 13:44:46 np0005574500.novalocal sshd-session[1794]: Connection closed by 38.102.83.114 port 53490 [preauth]
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 05 13:44:46 np0005574500.novalocal sshd-session[1846]: Unable to negotiate with 38.102.83.114 port 53532: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 05 13:44:46 np0005574500.novalocal sshd-session[1854]: Unable to negotiate with 38.102.83.114 port 53542: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 05 13:44:46 np0005574500.novalocal sshd-session[1836]: Connection closed by 38.102.83.114 port 53518 [preauth]
Jan 05 13:44:46 np0005574500.novalocal dracut[1274]: *** Including module: systemd ***
Jan 05 13:44:47 np0005574500.novalocal dracut[1274]: *** Including module: fips ***
Jan 05 13:44:47 np0005574500.novalocal chronyd[785]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Jan 05 13:44:47 np0005574500.novalocal chronyd[785]: System clock TAI offset set to 37 seconds
Jan 05 13:44:47 np0005574500.novalocal dracut[1274]: *** Including module: systemd-initrd ***
Jan 05 13:44:47 np0005574500.novalocal dracut[1274]: *** Including module: i18n ***
Jan 05 13:44:47 np0005574500.novalocal dracut[1274]: *** Including module: drm ***
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]: *** Including module: prefixdevname ***
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]: *** Including module: kernel-modules ***
Jan 05 13:44:48 np0005574500.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]: *** Including module: kernel-modules-extra ***
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]: *** Including module: qemu ***
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]: *** Including module: fstab-sys ***
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]: *** Including module: rootfs-block ***
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]: *** Including module: terminfo ***
Jan 05 13:44:48 np0005574500.novalocal chronyd[785]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Jan 05 13:44:48 np0005574500.novalocal dracut[1274]: *** Including module: udev-rules ***
Jan 05 13:44:49 np0005574500.novalocal dracut[1274]: Skipping udev rule: 91-permissions.rules
Jan 05 13:44:49 np0005574500.novalocal dracut[1274]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 05 13:44:49 np0005574500.novalocal dracut[1274]: *** Including module: virtiofs ***
Jan 05 13:44:49 np0005574500.novalocal dracut[1274]: *** Including module: dracut-systemd ***
Jan 05 13:44:49 np0005574500.novalocal dracut[1274]: *** Including module: usrmount ***
Jan 05 13:44:49 np0005574500.novalocal dracut[1274]: *** Including module: base ***
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: IRQ 25 affinity is now unmanaged
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: IRQ 31 affinity is now unmanaged
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: IRQ 28 affinity is now unmanaged
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: IRQ 32 affinity is now unmanaged
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: IRQ 30 affinity is now unmanaged
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 05 13:44:50 np0005574500.novalocal irqbalance[782]: IRQ 29 affinity is now unmanaged
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]: *** Including module: fs-lib ***
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]: *** Including module: kdumpbase ***
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:   microcode_ctl module: mangling fw_dir
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: configuration "intel" is ignored
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 05 13:44:50 np0005574500.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]: *** Including module: openssl ***
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]: *** Including module: shutdown ***
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]: *** Including module: squash ***
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]: *** Including modules done ***
Jan 05 13:44:51 np0005574500.novalocal dracut[1274]: *** Installing kernel module dependencies ***
Jan 05 13:44:52 np0005574500.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 13:44:52 np0005574500.novalocal dracut[1274]: *** Installing kernel module dependencies done ***
Jan 05 13:44:52 np0005574500.novalocal dracut[1274]: *** Resolving executable dependencies ***
Jan 05 13:44:54 np0005574500.novalocal dracut[1274]: *** Resolving executable dependencies done ***
Jan 05 13:44:54 np0005574500.novalocal dracut[1274]: *** Generating early-microcode cpio image ***
Jan 05 13:44:54 np0005574500.novalocal dracut[1274]: *** Store current command line parameters ***
Jan 05 13:44:54 np0005574500.novalocal dracut[1274]: Stored kernel commandline:
Jan 05 13:44:54 np0005574500.novalocal dracut[1274]: No dracut internal kernel commandline stored in the initramfs
Jan 05 13:44:54 np0005574500.novalocal dracut[1274]: *** Install squash loader ***
Jan 05 13:44:55 np0005574500.novalocal dracut[1274]: *** Squashing the files inside the initramfs ***
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: *** Squashing the files inside the initramfs done ***
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: *** Creating image file '/boot/initramfs-5.14.0-654.el9.x86_64kdump.img' ***
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: *** Hardlinking files ***
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: Mode:           real
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: Files:          50
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: Linked:         0 files
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: Compared:       0 xattrs
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: Compared:       0 files
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: Saved:          0 B
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: Duration:       0.000476 seconds
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: *** Hardlinking files done ***
Jan 05 13:44:56 np0005574500.novalocal dracut[1274]: *** Creating initramfs image file '/boot/initramfs-5.14.0-654.el9.x86_64kdump.img' done ***
Jan 05 13:44:57 np0005574500.novalocal kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Jan 05 13:44:57 np0005574500.novalocal kdumpctl[1018]: kdump: Starting kdump: [OK]
Jan 05 13:44:57 np0005574500.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 05 13:44:57 np0005574500.novalocal systemd[1]: Startup finished in 1.954s (kernel) + 2.944s (initrd) + 19.269s (userspace) = 24.168s.
Jan 05 13:45:01 np0005574500.novalocal sshd-session[4295]: Accepted publickey for zuul from 38.102.83.114 port 39696 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 05 13:45:01 np0005574500.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 05 13:45:01 np0005574500.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 05 13:45:01 np0005574500.novalocal systemd-logind[795]: New session 1 of user zuul.
Jan 05 13:45:01 np0005574500.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 05 13:45:01 np0005574500.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 05 13:45:01 np0005574500.novalocal systemd[4299]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Queued start job for default target Main User Target.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Created slice User Application Slice.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Reached target Paths.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Reached target Timers.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Starting D-Bus User Message Bus Socket...
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Starting Create User's Volatile Files and Directories...
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Finished Create User's Volatile Files and Directories.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Listening on D-Bus User Message Bus Socket.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Reached target Sockets.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Reached target Basic System.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Reached target Main User Target.
Jan 05 13:45:02 np0005574500.novalocal systemd[4299]: Startup finished in 132ms.
Jan 05 13:45:02 np0005574500.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 05 13:45:02 np0005574500.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 05 13:45:02 np0005574500.novalocal sshd-session[4295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 13:45:02 np0005574500.novalocal python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 13:45:05 np0005574500.novalocal python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 13:45:10 np0005574500.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 05 13:45:11 np0005574500.novalocal python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 13:45:11 np0005574500.novalocal python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 05 13:45:13 np0005574500.novalocal python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDoB5iNys0i1it2nOlR7wmz2sLd5YwI4BpaBlyK36afyquXMwBz/NGHa8KA55CgTRBg3Nmnkn4FI2lVqx7tUvACqwQYKpnQOLuL6O9CEJgDgeK6nV0uW3tbgZwffJhkjJzZjNDqp9Eg4Q5MMyiJ27rzNglK9PRAYv+UgF4FLfYw6j/+wCWVNxwa8sJRGAtoz3uIgsj2b3lVWAzVdka8TYTwnR1S2cYd1n52AB6a1umBs1k9qDJiznsGZhHN3u8C3ytHMx/gJP2XBMQMfHlR8Ot/DZNhbu0xW/u9ibpy5QTkco1UHCAi1b2n2umeYRixf72bHaocI79m3EjUl+NkD4pNdg2YnnGuZtQpw9Vq+KDlj/Q/5PRpSjSCtwm+jFscfiRroNs81lX1SGk5RjyaW7IYWm/0vuGCsu/rCIEYR2BxHIPocn53AYNWz8zwIMBi22gHa/Yy9sqxJYDIeVPtYWuErhwLtvFiuDGts3ZpIrfokULh1G+Wuzgmzjkhs3gogqs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:14 np0005574500.novalocal python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:14 np0005574500.novalocal python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:45:15 np0005574500.novalocal python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767620714.4432478-207-52890026466550/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=9a36f7e413fe4811a5c80cd76805e3fb_id_rsa follow=False checksum=023b5b7287dfcaac4f6af3bdedd151bcd6cd299f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:15 np0005574500.novalocal python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:45:16 np0005574500.novalocal python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767620715.3760514-240-75070258647150/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=9a36f7e413fe4811a5c80cd76805e3fb_id_rsa.pub follow=False checksum=e100e49e86bc6ed389d739f6a1d871d12044a175 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:17 np0005574500.novalocal python3[4971]: ansible-ping Invoked with data=pong
Jan 05 13:45:18 np0005574500.novalocal python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 13:45:20 np0005574500.novalocal python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 05 13:45:21 np0005574500.novalocal python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:22 np0005574500.novalocal python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:22 np0005574500.novalocal python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:23 np0005574500.novalocal python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:23 np0005574500.novalocal python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:23 np0005574500.novalocal python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:24 np0005574500.novalocal sudo[5229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvjugkerdnbsmcjgxnghfnlzkjbpytnv ; /usr/bin/python3'
Jan 05 13:45:24 np0005574500.novalocal sudo[5229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:45:25 np0005574500.novalocal python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:25 np0005574500.novalocal sudo[5229]: pam_unix(sudo:session): session closed for user root
Jan 05 13:45:25 np0005574500.novalocal sudo[5307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuffnjkshvukorgurdbgzznpophfkwja ; /usr/bin/python3'
Jan 05 13:45:25 np0005574500.novalocal sudo[5307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:45:25 np0005574500.novalocal python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:45:25 np0005574500.novalocal sudo[5307]: pam_unix(sudo:session): session closed for user root
Jan 05 13:45:26 np0005574500.novalocal sudo[5380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iighbowtwxjqjzvwpyczpfdjhqecwpph ; /usr/bin/python3'
Jan 05 13:45:26 np0005574500.novalocal sudo[5380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:45:26 np0005574500.novalocal python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767620725.2279177-21-84440164123184/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:26 np0005574500.novalocal sudo[5380]: pam_unix(sudo:session): session closed for user root
Jan 05 13:45:26 np0005574500.novalocal python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:27 np0005574500.novalocal python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:27 np0005574500.novalocal python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:27 np0005574500.novalocal python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:27 np0005574500.novalocal python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:28 np0005574500.novalocal python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:28 np0005574500.novalocal python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:28 np0005574500.novalocal python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:29 np0005574500.novalocal python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:29 np0005574500.novalocal python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:29 np0005574500.novalocal python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:29 np0005574500.novalocal python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:30 np0005574500.novalocal python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:30 np0005574500.novalocal python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:30 np0005574500.novalocal python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:30 np0005574500.novalocal python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:31 np0005574500.novalocal python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:31 np0005574500.novalocal python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:31 np0005574500.novalocal python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:32 np0005574500.novalocal python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:32 np0005574500.novalocal python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:32 np0005574500.novalocal python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:33 np0005574500.novalocal python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:33 np0005574500.novalocal python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:33 np0005574500.novalocal python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:34 np0005574500.novalocal python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:45:36 np0005574500.novalocal sudo[6054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esgvtpzivbfwwiaotbvrqbidflvetivg ; /usr/bin/python3'
Jan 05 13:45:36 np0005574500.novalocal sudo[6054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:45:36 np0005574500.novalocal python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 05 13:45:36 np0005574500.novalocal systemd[1]: Starting Time & Date Service...
Jan 05 13:45:36 np0005574500.novalocal systemd[1]: Started Time & Date Service.
Jan 05 13:45:36 np0005574500.novalocal systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Jan 05 13:45:36 np0005574500.novalocal sudo[6054]: pam_unix(sudo:session): session closed for user root
Jan 05 13:45:36 np0005574500.novalocal sudo[6085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmtdbjpojxokgrbtdcwghcdhssvsymlt ; /usr/bin/python3'
Jan 05 13:45:36 np0005574500.novalocal sudo[6085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:45:36 np0005574500.novalocal python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:36 np0005574500.novalocal sudo[6085]: pam_unix(sudo:session): session closed for user root
Jan 05 13:45:37 np0005574500.novalocal python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:45:37 np0005574500.novalocal python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1767620736.9255888-153-254734794961899/source _original_basename=tmp_xo8antk follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:38 np0005574500.novalocal python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:45:38 np0005574500.novalocal python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1767620737.8012788-183-66425576828322/source _original_basename=tmp9wepzir8 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:39 np0005574500.novalocal sudo[6505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avlbfzdzsmrwnsctojsntneixrncywwt ; /usr/bin/python3'
Jan 05 13:45:39 np0005574500.novalocal sudo[6505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:45:39 np0005574500.novalocal python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:45:39 np0005574500.novalocal sudo[6505]: pam_unix(sudo:session): session closed for user root
Jan 05 13:45:39 np0005574500.novalocal sudo[6578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybicpezurdptkdhfonyzcwtrfcnmvysw ; /usr/bin/python3'
Jan 05 13:45:39 np0005574500.novalocal sudo[6578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:45:39 np0005574500.novalocal python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1767620738.864102-231-250274781688389/source _original_basename=tmpr4__lrl_ follow=False checksum=7ec61ca4f255a146f094e43c4266fbfc3e8ddff2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:39 np0005574500.novalocal sudo[6578]: pam_unix(sudo:session): session closed for user root
Jan 05 13:45:40 np0005574500.novalocal python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:45:40 np0005574500.novalocal python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:45:40 np0005574500.novalocal sudo[6732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlkyotqdhgpjavsuilhmxujhuxohmkex ; /usr/bin/python3'
Jan 05 13:45:40 np0005574500.novalocal sudo[6732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:45:40 np0005574500.novalocal python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:45:40 np0005574500.novalocal sudo[6732]: pam_unix(sudo:session): session closed for user root
Jan 05 13:45:41 np0005574500.novalocal sudo[6805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfnxiyfzwkuoqfjsvffbcgfuqvhsougx ; /usr/bin/python3'
Jan 05 13:45:41 np0005574500.novalocal sudo[6805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:45:41 np0005574500.novalocal python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1767620740.5001535-273-124194153030314/source _original_basename=tmpgu9sa7fo follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:41 np0005574500.novalocal sudo[6805]: pam_unix(sudo:session): session closed for user root
Jan 05 13:45:41 np0005574500.novalocal sudo[6856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikutlbnvjidovegpgayzgxgaeslvykpo ; /usr/bin/python3'
Jan 05 13:45:41 np0005574500.novalocal sudo[6856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:45:41 np0005574500.novalocal python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-0252-5340-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:45:41 np0005574500.novalocal sudo[6856]: pam_unix(sudo:session): session closed for user root
Jan 05 13:45:42 np0005574500.novalocal python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-0252-5340-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 05 13:45:43 np0005574500.novalocal python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:45:59 np0005574500.novalocal sudo[6938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvygldqnjcznfslaafetyttfdsgsiiun ; /usr/bin/python3'
Jan 05 13:45:59 np0005574500.novalocal sudo[6938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:46:00 np0005574500.novalocal python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:46:00 np0005574500.novalocal sudo[6938]: pam_unix(sudo:session): session closed for user root
Jan 05 13:46:06 np0005574500.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 05 13:46:33 np0005574500.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 05 13:46:33 np0005574500.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 05 13:46:33 np0005574500.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 05 13:46:33 np0005574500.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 05 13:46:33 np0005574500.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 05 13:46:33 np0005574500.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 05 13:46:33 np0005574500.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 05 13:46:33 np0005574500.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 05 13:46:33 np0005574500.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 05 13:46:33 np0005574500.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.8911] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 05 13:46:33 np0005574500.novalocal systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.9176] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.9215] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.9219] device (eth1): carrier: link connected
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.9223] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.9233] policy: auto-activating connection 'Wired connection 1' (25bbf212-db31-38a8-8c4b-a6f883cb4430)
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.9239] device (eth1): Activation: starting connection 'Wired connection 1' (25bbf212-db31-38a8-8c4b-a6f883cb4430)
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.9240] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.9244] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.9251] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 13:46:33 np0005574500.novalocal NetworkManager[857]: <info>  [1767620793.9258] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 05 13:46:34 np0005574500.novalocal python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-de25-ea77-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:46:41 np0005574500.novalocal sudo[7048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsxkiovedzunuqzdvfyxpjvgjywheupr ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 05 13:46:41 np0005574500.novalocal sudo[7048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:46:41 np0005574500.novalocal python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:46:41 np0005574500.novalocal sudo[7048]: pam_unix(sudo:session): session closed for user root
Jan 05 13:46:41 np0005574500.novalocal sudo[7121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smsxtcaasoobweudemcuncbrsezklobh ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 05 13:46:41 np0005574500.novalocal sudo[7121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:46:42 np0005574500.novalocal python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767620801.262906-102-225959372704907/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=430146679ec76d2b5a4a320af93ed1df7c503d34 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:46:42 np0005574500.novalocal sudo[7121]: pam_unix(sudo:session): session closed for user root
Jan 05 13:46:42 np0005574500.novalocal sudo[7171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlwffknqfiyyqsbdmphvzdpkzcixuket ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 05 13:46:42 np0005574500.novalocal sudo[7171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:46:42 np0005574500.novalocal python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 13:46:42 np0005574500.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 05 13:46:42 np0005574500.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 05 13:46:42 np0005574500.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620802.8703] caught SIGTERM, shutting down normally.
Jan 05 13:46:42 np0005574500.novalocal systemd[1]: Stopping Network Manager...
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620802.8714] dhcp4 (eth0): canceled DHCP transaction
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620802.8715] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620802.8715] dhcp4 (eth0): state changed no lease
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620802.8718] manager: NetworkManager state is now CONNECTING
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620802.8892] dhcp4 (eth1): canceled DHCP transaction
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620802.8893] dhcp4 (eth1): state changed no lease
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[857]: <info>  [1767620802.8966] exiting (success)
Jan 05 13:46:42 np0005574500.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 13:46:42 np0005574500.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 13:46:42 np0005574500.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 05 13:46:42 np0005574500.novalocal systemd[1]: Stopped Network Manager.
Jan 05 13:46:42 np0005574500.novalocal systemd[1]: Starting Network Manager...
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620802.9419] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:4a842e6d-ff22-4aef-a67c-1e6f6b9a395f)
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620802.9422] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 05 13:46:42 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620802.9474] manager[0x56093554c000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 05 13:46:42 np0005574500.novalocal systemd[1]: Starting Hostname Service...
Jan 05 13:46:43 np0005574500.novalocal systemd[1]: Started Hostname Service.
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0641] hostname: hostname: using hostnamed
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0646] hostname: static hostname changed from (none) to "np0005574500.novalocal"
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0655] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0661] manager[0x56093554c000]: rfkill: Wi-Fi hardware radio set enabled
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0662] manager[0x56093554c000]: rfkill: WWAN hardware radio set enabled
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0706] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0707] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0708] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0709] manager: Networking is enabled by state file
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0713] settings: Loaded settings plugin: keyfile (internal)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0719] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0759] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0774] dhcp: init: Using DHCP client 'internal'
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0778] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0786] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0794] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0807] device (lo): Activation: starting connection 'lo' (3df85b44-84fa-4707-aff2-a3490d11ca8e)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0817] device (eth0): carrier: link connected
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0824] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0831] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0831] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0841] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0851] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0864] device (eth1): carrier: link connected
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0870] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0877] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (25bbf212-db31-38a8-8c4b-a6f883cb4430) (indicated)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0878] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0886] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0899] device (eth1): Activation: starting connection 'Wired connection 1' (25bbf212-db31-38a8-8c4b-a6f883cb4430)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0909] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 05 13:46:43 np0005574500.novalocal systemd[1]: Started Network Manager.
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0927] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0934] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0940] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0945] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0952] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0958] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0965] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0973] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0987] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.0994] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1010] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1016] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1047] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 05 13:46:43 np0005574500.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1051] dhcp4 (eth0): state changed new lease, address=38.102.83.115
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1061] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1072] device (lo): Activation: successful, device activated.
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1093] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1193] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1249] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1254] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1263] manager: NetworkManager state is now CONNECTED_SITE
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1269] device (eth0): Activation: successful, device activated.
Jan 05 13:46:43 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620803.1280] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 05 13:46:43 np0005574500.novalocal sudo[7171]: pam_unix(sudo:session): session closed for user root
Jan 05 13:46:43 np0005574500.novalocal python3[7257]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-de25-ea77-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:46:53 np0005574500.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 13:47:13 np0005574500.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0051] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 05 13:47:28 np0005574500.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 13:47:28 np0005574500.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0432] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0437] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0457] device (eth1): Activation: successful, device activated.
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0468] manager: startup complete
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0474] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <warn>  [1767620848.0497] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0508] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 05 13:47:28 np0005574500.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0643] dhcp4 (eth1): canceled DHCP transaction
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0643] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0643] dhcp4 (eth1): state changed no lease
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0665] policy: auto-activating connection 'ci-private-network' (f6c38ead-36b2-5d84-9f47-323474c4e071)
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0672] device (eth1): Activation: starting connection 'ci-private-network' (f6c38ead-36b2-5d84-9f47-323474c4e071)
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0673] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0677] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0687] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0702] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0754] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0757] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 13:47:28 np0005574500.novalocal NetworkManager[7182]: <info>  [1767620848.0766] device (eth1): Activation: successful, device activated.
Jan 05 13:47:38 np0005574500.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 13:47:41 np0005574500.novalocal systemd[4299]: Starting Mark boot as successful...
Jan 05 13:47:41 np0005574500.novalocal systemd[4299]: Finished Mark boot as successful.
Jan 05 13:47:43 np0005574500.novalocal sshd-session[4308]: Received disconnect from 38.102.83.114 port 39696:11: disconnected by user
Jan 05 13:47:43 np0005574500.novalocal sshd-session[4308]: Disconnected from user zuul 38.102.83.114 port 39696
Jan 05 13:47:43 np0005574500.novalocal sshd-session[4295]: pam_unix(sshd:session): session closed for user zuul
Jan 05 13:47:43 np0005574500.novalocal systemd-logind[795]: Session 1 logged out. Waiting for processes to exit.
Jan 05 13:47:52 np0005574500.novalocal sshd-session[7286]: Accepted publickey for zuul from 38.102.83.114 port 43650 ssh2: RSA SHA256:J8z/B181hdplgLZFhp0hXyUBZUpMLnoe/Gt2JPtUKmM
Jan 05 13:47:52 np0005574500.novalocal systemd-logind[795]: New session 3 of user zuul.
Jan 05 13:47:52 np0005574500.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 05 13:47:52 np0005574500.novalocal sshd-session[7286]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 13:47:52 np0005574500.novalocal sudo[7365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfcthdhgnmxweyhpvkcbgtkdmgbdgthp ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 05 13:47:52 np0005574500.novalocal sudo[7365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:47:52 np0005574500.novalocal python3[7367]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:47:52 np0005574500.novalocal sudo[7365]: pam_unix(sudo:session): session closed for user root
Jan 05 13:47:53 np0005574500.novalocal sudo[7438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icbehoqncigxnxzskkygabzyqbmpdcdl ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 05 13:47:53 np0005574500.novalocal sudo[7438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:47:53 np0005574500.novalocal python3[7440]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767620872.5710852-259-65965274635855/source _original_basename=tmp8zr52yiq follow=False checksum=4740b0f87eb2b412648e5f8d0cd4951fe59d79d7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:47:53 np0005574500.novalocal sudo[7438]: pam_unix(sudo:session): session closed for user root
Jan 05 13:47:55 np0005574500.novalocal sshd-session[7289]: Connection closed by 38.102.83.114 port 43650
Jan 05 13:47:55 np0005574500.novalocal sshd-session[7286]: pam_unix(sshd:session): session closed for user zuul
Jan 05 13:47:55 np0005574500.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 05 13:47:55 np0005574500.novalocal systemd-logind[795]: Session 3 logged out. Waiting for processes to exit.
Jan 05 13:47:55 np0005574500.novalocal systemd-logind[795]: Removed session 3.
Jan 05 13:49:15 np0005574500.novalocal sshd-session[7466]: Connection closed by 172.105.102.10 port 42876
Jan 05 13:49:22 np0005574500.novalocal sshd-session[7467]: Connection closed by 172.105.102.10 port 45982
Jan 05 13:49:22 np0005574500.novalocal sshd-session[7473]: Connection closed by 172.105.102.10 port 46018
Jan 05 13:49:22 np0005574500.novalocal sshd-session[7469]: error: Protocol major versions differ: 2 vs. 1
Jan 05 13:49:22 np0005574500.novalocal sshd-session[7470]: error: Protocol major versions differ: 2 vs. 1
Jan 05 13:49:22 np0005574500.novalocal sshd-session[7469]: banner exchange: Connection from 172.105.102.10 port 46012: could not read protocol version
Jan 05 13:49:22 np0005574500.novalocal sshd-session[7470]: banner exchange: Connection from 172.105.102.10 port 45998: could not read protocol version
Jan 05 13:49:22 np0005574500.novalocal sshd-session[7471]: Unable to negotiate with 172.105.102.10 port 46004: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1 [preauth]
Jan 05 13:49:23 np0005574500.novalocal sshd-session[7468]: Invalid user rklzu from 172.105.102.10 port 45986
Jan 05 13:49:23 np0005574500.novalocal sshd-session[7468]: Connection closed by invalid user rklzu 172.105.102.10 port 45986 [preauth]
Jan 05 13:49:23 np0005574500.novalocal sshd-session[7477]: Unable to negotiate with 172.105.102.10 port 46030: no matching host key type found. Their offer: ssh-dss [preauth]
Jan 05 13:49:23 np0005574500.novalocal sshd-session[7479]: Unable to negotiate with 172.105.102.10 port 46036: no matching host key type found. Their offer: ssh-rsa [preauth]
Jan 05 13:49:23 np0005574500.novalocal sshd-session[7481]: Unable to negotiate with 172.105.102.10 port 46050: no matching MAC found. Their offer: hmac-md5,hmac-sha1,hmac-ripemd160 [preauth]
Jan 05 13:49:23 np0005574500.novalocal sshd-session[7483]: Unable to negotiate with 172.105.102.10 port 46054: no matching host key type found. Their offer: ecdsa-sha2-nistp384 [preauth]
Jan 05 13:49:24 np0005574500.novalocal sshd-session[7485]: Unable to negotiate with 172.105.102.10 port 46072: no matching host key type found. Their offer: ecdsa-sha2-nistp521 [preauth]
Jan 05 13:49:24 np0005574500.novalocal sshd-session[7487]: Unable to negotiate with 172.105.102.10 port 46088: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 05 13:49:24 np0005574500.novalocal sshd-session[7472]: Connection closed by 172.105.102.10 port 46014 [preauth]
Jan 05 13:50:41 np0005574500.novalocal systemd[4299]: Created slice User Background Tasks Slice.
Jan 05 13:50:41 np0005574500.novalocal systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Jan 05 13:50:41 np0005574500.novalocal systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Jan 05 13:51:50 np0005574500.novalocal sshd-session[7494]: Accepted publickey for zuul from 38.102.83.114 port 33072 ssh2: RSA SHA256:J8z/B181hdplgLZFhp0hXyUBZUpMLnoe/Gt2JPtUKmM
Jan 05 13:51:50 np0005574500.novalocal systemd-logind[795]: New session 4 of user zuul.
Jan 05 13:51:50 np0005574500.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 05 13:51:50 np0005574500.novalocal sshd-session[7494]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 13:51:50 np0005574500.novalocal sudo[7521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ealzgpmsjnlcszokgajddxmjxrpxcysk ; /usr/bin/python3'
Jan 05 13:51:50 np0005574500.novalocal sudo[7521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:50 np0005574500.novalocal python3[7523]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-a9e2-31cf-00000000216b-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:51:50 np0005574500.novalocal sudo[7521]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:50 np0005574500.novalocal sudo[7549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anwxnbjmrocgmmgnrqnkvnmaxlgbjbgh ; /usr/bin/python3'
Jan 05 13:51:50 np0005574500.novalocal sudo[7549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:50 np0005574500.novalocal python3[7551]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:51:50 np0005574500.novalocal sudo[7549]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:50 np0005574500.novalocal sudo[7575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-almozagevdyzojjwpdysvpkdygjvlqce ; /usr/bin/python3'
Jan 05 13:51:50 np0005574500.novalocal sudo[7575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:51 np0005574500.novalocal python3[7578]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:51:51 np0005574500.novalocal sudo[7575]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:51 np0005574500.novalocal sudo[7602]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfnyfnhtcqxrjmihairdbqykxmvhxzpa ; /usr/bin/python3'
Jan 05 13:51:51 np0005574500.novalocal sudo[7602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:51 np0005574500.novalocal python3[7604]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:51:51 np0005574500.novalocal sudo[7602]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:51 np0005574500.novalocal sudo[7628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqzidxlkfvlgcwfpjiycekzyayipfiue ; /usr/bin/python3'
Jan 05 13:51:51 np0005574500.novalocal sudo[7628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:51 np0005574500.novalocal python3[7630]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:51:51 np0005574500.novalocal sudo[7628]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:51 np0005574500.novalocal sudo[7654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzhjhssqsbvkbmxqfuqavmvqnkllelkl ; /usr/bin/python3'
Jan 05 13:51:51 np0005574500.novalocal sudo[7654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:52 np0005574500.novalocal python3[7656]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:51:52 np0005574500.novalocal sudo[7654]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:52 np0005574500.novalocal sudo[7732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlyrktllkcctrdotvbrmzwifgtjljpuu ; /usr/bin/python3'
Jan 05 13:51:52 np0005574500.novalocal sudo[7732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:52 np0005574500.novalocal python3[7734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:51:52 np0005574500.novalocal sudo[7732]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:52 np0005574500.novalocal sudo[7805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqjzyzfoenitppmsjjvhjpljtpdslbus ; /usr/bin/python3'
Jan 05 13:51:52 np0005574500.novalocal sudo[7805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:52 np0005574500.novalocal python3[7807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767621112.2859228-506-188221096594441/source _original_basename=tmpswgatmod follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:51:52 np0005574500.novalocal sudo[7805]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:53 np0005574500.novalocal sudo[7855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlepiencjimvapxekrtqumzeqracywhq ; /usr/bin/python3'
Jan 05 13:51:53 np0005574500.novalocal sudo[7855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:53 np0005574500.novalocal python3[7857]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 13:51:53 np0005574500.novalocal systemd[1]: Reloading.
Jan 05 13:51:53 np0005574500.novalocal systemd-rc-local-generator[7875]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 13:51:54 np0005574500.novalocal sudo[7855]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:55 np0005574500.novalocal sudo[7911]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggkknawlntxwybifyidygpwhmfgtqdto ; /usr/bin/python3'
Jan 05 13:51:55 np0005574500.novalocal sudo[7911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:55 np0005574500.novalocal python3[7913]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 05 13:51:55 np0005574500.novalocal sudo[7911]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:55 np0005574500.novalocal sudo[7937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbzndhkyeunreqhpgqisljyuuvoocfpq ; /usr/bin/python3'
Jan 05 13:51:55 np0005574500.novalocal sudo[7937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:55 np0005574500.novalocal python3[7939]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:51:56 np0005574500.novalocal sudo[7937]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:56 np0005574500.novalocal sudo[7965]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tszsrasrgxlefwfticvgljrlxjbcdzjv ; /usr/bin/python3'
Jan 05 13:51:56 np0005574500.novalocal sudo[7965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:56 np0005574500.novalocal python3[7967]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:51:56 np0005574500.novalocal sudo[7965]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:56 np0005574500.novalocal sudo[7993]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlamclqhoiesmhplqdsdbbwfsonshhwo ; /usr/bin/python3'
Jan 05 13:51:56 np0005574500.novalocal sudo[7993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:56 np0005574500.novalocal python3[7995]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:51:56 np0005574500.novalocal sudo[7993]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:56 np0005574500.novalocal sudo[8021]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byxbfamlraogqlvqqgjbqwdsasbcegrx ; /usr/bin/python3'
Jan 05 13:51:56 np0005574500.novalocal sudo[8021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:51:56 np0005574500.novalocal python3[8023]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:51:56 np0005574500.novalocal sudo[8021]: pam_unix(sudo:session): session closed for user root
Jan 05 13:51:57 np0005574500.novalocal python3[8050]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-a9e2-31cf-000000002172-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:51:58 np0005574500.novalocal python3[8080]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 05 13:52:00 np0005574500.novalocal sshd-session[7497]: Connection closed by 38.102.83.114 port 33072
Jan 05 13:52:00 np0005574500.novalocal sshd-session[7494]: pam_unix(sshd:session): session closed for user zuul
Jan 05 13:52:00 np0005574500.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 05 13:52:00 np0005574500.novalocal systemd[1]: session-4.scope: Consumed 4.332s CPU time.
Jan 05 13:52:00 np0005574500.novalocal systemd-logind[795]: Session 4 logged out. Waiting for processes to exit.
Jan 05 13:52:00 np0005574500.novalocal systemd-logind[795]: Removed session 4.
Jan 05 13:52:01 np0005574500.novalocal sshd-session[8085]: Accepted publickey for zuul from 38.102.83.114 port 44670 ssh2: RSA SHA256:J8z/B181hdplgLZFhp0hXyUBZUpMLnoe/Gt2JPtUKmM
Jan 05 13:52:01 np0005574500.novalocal systemd-logind[795]: New session 5 of user zuul.
Jan 05 13:52:01 np0005574500.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 05 13:52:01 np0005574500.novalocal sshd-session[8085]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 13:52:01 np0005574500.novalocal sudo[8112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmqzdkecginrgjzzftngwlqsykbnytlg ; /usr/bin/python3'
Jan 05 13:52:01 np0005574500.novalocal sudo[8112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:52:02 np0005574500.novalocal python3[8114]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 05 13:52:14 np0005574500.novalocal kernel: SELinux:  Converting 384 SID table entries...
Jan 05 13:52:14 np0005574500.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 13:52:14 np0005574500.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 05 13:52:14 np0005574500.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 13:52:14 np0005574500.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 05 13:52:14 np0005574500.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 13:52:14 np0005574500.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 13:52:14 np0005574500.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 13:52:19 np0005574500.novalocal kernel: SELinux:  Converting 384 SID table entries...
Jan 05 13:52:19 np0005574500.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 13:52:19 np0005574500.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 05 13:52:19 np0005574500.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 13:52:19 np0005574500.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 05 13:52:19 np0005574500.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 13:52:19 np0005574500.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 13:52:19 np0005574500.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 13:52:28 np0005574500.novalocal kernel: SELinux:  Converting 384 SID table entries...
Jan 05 13:52:28 np0005574500.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 13:52:28 np0005574500.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 05 13:52:28 np0005574500.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 13:52:28 np0005574500.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 05 13:52:28 np0005574500.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 13:52:28 np0005574500.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 13:52:28 np0005574500.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 13:52:29 np0005574500.novalocal setsebool[8173]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 05 13:52:29 np0005574500.novalocal setsebool[8173]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 05 13:52:40 np0005574500.novalocal kernel: SELinux:  Converting 387 SID table entries...
Jan 05 13:52:40 np0005574500.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 13:52:40 np0005574500.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 05 13:52:40 np0005574500.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 13:52:40 np0005574500.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 05 13:52:40 np0005574500.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 13:52:40 np0005574500.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 13:52:40 np0005574500.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 13:52:57 np0005574500.novalocal dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 05 13:52:57 np0005574500.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 13:52:57 np0005574500.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 05 13:52:57 np0005574500.novalocal systemd[1]: Reloading.
Jan 05 13:52:57 np0005574500.novalocal systemd-rc-local-generator[8913]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 13:52:58 np0005574500.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 13:52:59 np0005574500.novalocal sudo[8112]: pam_unix(sudo:session): session closed for user root
Jan 05 13:53:10 np0005574500.novalocal python3[16019]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-0635-fcd8-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 13:53:11 np0005574500.novalocal kernel: evm: overlay not supported
Jan 05 13:53:11 np0005574500.novalocal systemd[4299]: Starting D-Bus User Message Bus...
Jan 05 13:53:11 np0005574500.novalocal dbus-broker-launch[16455]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 05 13:53:11 np0005574500.novalocal dbus-broker-launch[16455]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 05 13:53:11 np0005574500.novalocal systemd[4299]: Started D-Bus User Message Bus.
Jan 05 13:53:11 np0005574500.novalocal dbus-broker-lau[16455]: Ready
Jan 05 13:53:11 np0005574500.novalocal systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 05 13:53:11 np0005574500.novalocal systemd[4299]: Created slice Slice /user.
Jan 05 13:53:11 np0005574500.novalocal systemd[4299]: podman-16378.scope: unit configures an IP firewall, but not running as root.
Jan 05 13:53:11 np0005574500.novalocal systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Jan 05 13:53:11 np0005574500.novalocal systemd[4299]: Started podman-16378.scope.
Jan 05 13:53:12 np0005574500.novalocal systemd[4299]: Started podman-pause-4e6505a2.scope.
Jan 05 13:53:12 np0005574500.novalocal sudo[16783]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrhpqdvcqrgveifwrwwpagrhuvuttknp ; /usr/bin/python3'
Jan 05 13:53:12 np0005574500.novalocal sudo[16783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:53:12 np0005574500.novalocal python3[16798]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.5:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.5:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:53:12 np0005574500.novalocal python3[16798]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 05 13:53:12 np0005574500.novalocal sudo[16783]: pam_unix(sudo:session): session closed for user root
Jan 05 13:53:13 np0005574500.novalocal sshd-session[8088]: Connection closed by 38.102.83.114 port 44670
Jan 05 13:53:13 np0005574500.novalocal sshd-session[8085]: pam_unix(sshd:session): session closed for user zuul
Jan 05 13:53:13 np0005574500.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 05 13:53:13 np0005574500.novalocal systemd[1]: session-5.scope: Consumed 55.138s CPU time.
Jan 05 13:53:13 np0005574500.novalocal systemd-logind[795]: Session 5 logged out. Waiting for processes to exit.
Jan 05 13:53:13 np0005574500.novalocal systemd-logind[795]: Removed session 5.
Jan 05 13:53:30 np0005574500.novalocal irqbalance[782]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 05 13:53:30 np0005574500.novalocal irqbalance[782]: IRQ 27 affinity is now unmanaged
Jan 05 13:53:34 np0005574500.novalocal sshd-session[24401]: Unable to negotiate with 38.102.83.65 port 48156: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 05 13:53:34 np0005574500.novalocal sshd-session[24405]: Connection closed by 38.102.83.65 port 48126 [preauth]
Jan 05 13:53:34 np0005574500.novalocal sshd-session[24399]: Connection closed by 38.102.83.65 port 48136 [preauth]
Jan 05 13:53:34 np0005574500.novalocal sshd-session[24407]: Unable to negotiate with 38.102.83.65 port 48144: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 05 13:53:34 np0005574500.novalocal sshd-session[24402]: Unable to negotiate with 38.102.83.65 port 48162: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 05 13:53:38 np0005574500.novalocal sshd-session[26286]: Accepted publickey for zuul from 38.102.83.114 port 32846 ssh2: RSA SHA256:J8z/B181hdplgLZFhp0hXyUBZUpMLnoe/Gt2JPtUKmM
Jan 05 13:53:38 np0005574500.novalocal systemd-logind[795]: New session 6 of user zuul.
Jan 05 13:53:39 np0005574500.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 05 13:53:39 np0005574500.novalocal sshd-session[26286]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 13:53:39 np0005574500.novalocal python3[26392]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEzLZcUOERcKBJe0A9H8CikK/PQP6kS1XxgxwgcDlcFgKHUxnc9vdPGrq1JrBNS3gnxeaWfz3J+meDkHEI47Y3Q= zuul@np0005574499.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:53:39 np0005574500.novalocal sudo[26595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xipudemozykwaxganaayrxqdkkldmjoz ; /usr/bin/python3'
Jan 05 13:53:39 np0005574500.novalocal sudo[26595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:53:39 np0005574500.novalocal python3[26608]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEzLZcUOERcKBJe0A9H8CikK/PQP6kS1XxgxwgcDlcFgKHUxnc9vdPGrq1JrBNS3gnxeaWfz3J+meDkHEI47Y3Q= zuul@np0005574499.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:53:39 np0005574500.novalocal sudo[26595]: pam_unix(sudo:session): session closed for user root
Jan 05 13:53:40 np0005574500.novalocal sudo[26961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxgxldqkaisroiskmdhnvkziuxlrwhdh ; /usr/bin/python3'
Jan 05 13:53:40 np0005574500.novalocal sudo[26961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:53:40 np0005574500.novalocal python3[26973]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005574500.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 05 13:53:40 np0005574500.novalocal useradd[27081]: new group: name=cloud-admin, GID=1002
Jan 05 13:53:40 np0005574500.novalocal useradd[27081]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 05 13:53:40 np0005574500.novalocal sudo[26961]: pam_unix(sudo:session): session closed for user root
Jan 05 13:53:41 np0005574500.novalocal sudo[27208]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szpdtfgzirsrfxisypoabsxhzxokwvrb ; /usr/bin/python3'
Jan 05 13:53:41 np0005574500.novalocal sudo[27208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:53:41 np0005574500.novalocal python3[27222]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEzLZcUOERcKBJe0A9H8CikK/PQP6kS1XxgxwgcDlcFgKHUxnc9vdPGrq1JrBNS3gnxeaWfz3J+meDkHEI47Y3Q= zuul@np0005574499.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 13:53:41 np0005574500.novalocal sudo[27208]: pam_unix(sudo:session): session closed for user root
Jan 05 13:53:41 np0005574500.novalocal sudo[27465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knbiupqpqjyqgouvpividlggbfxlnszl ; /usr/bin/python3'
Jan 05 13:53:41 np0005574500.novalocal sudo[27465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:53:41 np0005574500.novalocal python3[27472]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:53:41 np0005574500.novalocal sudo[27465]: pam_unix(sudo:session): session closed for user root
Jan 05 13:53:41 np0005574500.novalocal sudo[27708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnjkklqnsvndvaovlsqzgnwjbespujky ; /usr/bin/python3'
Jan 05 13:53:41 np0005574500.novalocal sudo[27708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:53:42 np0005574500.novalocal python3[27722]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767621221.3396573-135-49169070269607/source _original_basename=tmpriim_zb7 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:53:42 np0005574500.novalocal sudo[27708]: pam_unix(sudo:session): session closed for user root
Jan 05 13:53:42 np0005574500.novalocal sudo[28002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfbkjcowymgpjpsitpcyfgpaanbzghzs ; /usr/bin/python3'
Jan 05 13:53:42 np0005574500.novalocal sudo[28002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:53:42 np0005574500.novalocal python3[28009]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 05 13:53:42 np0005574500.novalocal systemd[1]: Starting Hostname Service...
Jan 05 13:53:43 np0005574500.novalocal systemd[1]: Started Hostname Service.
Jan 05 13:53:43 np0005574500.novalocal systemd-hostnamed[28110]: Changed pretty hostname to 'compute-0'
Jan 05 13:53:43 compute-0 systemd-hostnamed[28110]: Hostname set to <compute-0> (static)
Jan 05 13:53:43 compute-0 NetworkManager[7182]: <info>  [1767621223.0821] hostname: static hostname changed from "np0005574500.novalocal" to "compute-0"
Jan 05 13:53:43 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 13:53:43 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 13:53:43 compute-0 sudo[28002]: pam_unix(sudo:session): session closed for user root
Jan 05 13:53:43 compute-0 sshd-session[26329]: Connection closed by 38.102.83.114 port 32846
Jan 05 13:53:43 compute-0 sshd-session[26286]: pam_unix(sshd:session): session closed for user zuul
Jan 05 13:53:43 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 05 13:53:43 compute-0 systemd[1]: session-6.scope: Consumed 2.392s CPU time.
Jan 05 13:53:43 compute-0 systemd-logind[795]: Session 6 logged out. Waiting for processes to exit.
Jan 05 13:53:43 compute-0 systemd-logind[795]: Removed session 6.
Jan 05 13:53:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 13:53:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 13:53:51 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 1.012s CPU time.
Jan 05 13:53:51 compute-0 systemd[1]: run-r0f88aae0d6a44eb9998d3372c23f58d1.service: Deactivated successfully.
Jan 05 13:53:53 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 13:53:58 compute-0 sshd-session[29905]: Received disconnect from 193.46.255.244 port 48022:11:  [preauth]
Jan 05 13:53:58 compute-0 sshd-session[29905]: Disconnected from authenticating user root 193.46.255.244 port 48022 [preauth]
Jan 05 13:54:13 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 05 13:57:30 compute-0 sshd-session[29913]: Received disconnect from 193.46.255.217 port 42652:11:  [preauth]
Jan 05 13:57:30 compute-0 sshd-session[29913]: Disconnected from authenticating user root 193.46.255.217 port 42652 [preauth]
Jan 05 13:58:27 compute-0 sshd-session[29916]: Accepted publickey for zuul from 38.102.83.65 port 58342 ssh2: RSA SHA256:J8z/B181hdplgLZFhp0hXyUBZUpMLnoe/Gt2JPtUKmM
Jan 05 13:58:27 compute-0 systemd-logind[795]: New session 7 of user zuul.
Jan 05 13:58:27 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 05 13:58:27 compute-0 sshd-session[29916]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 13:58:28 compute-0 python3[29992]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 13:58:29 compute-0 sudo[30106]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpkfnbjbntmrixnbonajzrehypvvpvko ; /usr/bin/python3'
Jan 05 13:58:29 compute-0 sudo[30106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:29 compute-0 python3[30108]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:58:29 compute-0 sudo[30106]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:30 compute-0 sudo[30179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uistrpqkxbnwzsllurvrtawwelzpctep ; /usr/bin/python3'
Jan 05 13:58:30 compute-0 sudo[30179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:30 compute-0 python3[30181]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767621509.406832-33611-190794320272325/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:58:30 compute-0 sudo[30179]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:30 compute-0 sudo[30205]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eczbbpqbfzbkwdjyznkwpoyforltihit ; /usr/bin/python3'
Jan 05 13:58:30 compute-0 sudo[30205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:30 compute-0 python3[30207]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:58:30 compute-0 sudo[30205]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:30 compute-0 sudo[30278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwtlcyltkshcrgqzmmhmzqapyzrojuhw ; /usr/bin/python3'
Jan 05 13:58:30 compute-0 sudo[30278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:30 compute-0 python3[30280]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767621509.406832-33611-190794320272325/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:58:30 compute-0 sudo[30278]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:31 compute-0 sudo[30304]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnutbftclrnnqvjteossbjtzjkchnkeb ; /usr/bin/python3'
Jan 05 13:58:31 compute-0 sudo[30304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:31 compute-0 python3[30306]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:58:31 compute-0 sudo[30304]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:31 compute-0 sudo[30377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ondvxicxkylimqdxeeywlsqswqurgunf ; /usr/bin/python3'
Jan 05 13:58:31 compute-0 sudo[30377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:31 compute-0 python3[30379]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767621509.406832-33611-190794320272325/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:58:31 compute-0 sudo[30377]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:31 compute-0 sudo[30403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwtqapahwlhxpjkxydzlkcrkpflmznam ; /usr/bin/python3'
Jan 05 13:58:31 compute-0 sudo[30403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:31 compute-0 python3[30405]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:58:31 compute-0 sudo[30403]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:32 compute-0 sudo[30476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzmsvxdelguhesipknlwkvowutzfawai ; /usr/bin/python3'
Jan 05 13:58:32 compute-0 sudo[30476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:32 compute-0 python3[30478]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767621509.406832-33611-190794320272325/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:58:32 compute-0 sudo[30476]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:32 compute-0 sudo[30502]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trihmlrpaoswagghhjbcheasuezhiuth ; /usr/bin/python3'
Jan 05 13:58:32 compute-0 sudo[30502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:32 compute-0 python3[30504]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:58:32 compute-0 sudo[30502]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:32 compute-0 sudo[30575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfieryhicootylchfihckwkvhdyptejt ; /usr/bin/python3'
Jan 05 13:58:32 compute-0 sudo[30575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:33 compute-0 python3[30577]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767621509.406832-33611-190794320272325/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:58:33 compute-0 sudo[30575]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:33 compute-0 sudo[30601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjoiartytckdrmkgommeqoksgwymjbed ; /usr/bin/python3'
Jan 05 13:58:33 compute-0 sudo[30601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:33 compute-0 python3[30603]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:58:33 compute-0 sudo[30601]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:33 compute-0 sudo[30674]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reqgxdwaebidawrsfwniinkxlisymrgs ; /usr/bin/python3'
Jan 05 13:58:33 compute-0 sudo[30674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:33 compute-0 python3[30676]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767621509.406832-33611-190794320272325/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:58:33 compute-0 sudo[30674]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:33 compute-0 sudo[30700]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qngiqnopyqgeunyqwymavtlpihnhkvzx ; /usr/bin/python3'
Jan 05 13:58:33 compute-0 sudo[30700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:34 compute-0 python3[30702]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 13:58:34 compute-0 sudo[30700]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:34 compute-0 sudo[30773]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndcapolgliwfmywuzqdjyxqsinrvicmb ; /usr/bin/python3'
Jan 05 13:58:34 compute-0 sudo[30773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 13:58:34 compute-0 python3[30775]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767621509.406832-33611-190794320272325/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 13:58:34 compute-0 sudo[30773]: pam_unix(sudo:session): session closed for user root
Jan 05 13:58:37 compute-0 sshd-session[30800]: Unable to negotiate with 192.168.122.11 port 37468: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 05 13:58:37 compute-0 sshd-session[30801]: Connection closed by 192.168.122.11 port 37446 [preauth]
Jan 05 13:58:37 compute-0 sshd-session[30803]: Connection closed by 192.168.122.11 port 37462 [preauth]
Jan 05 13:58:37 compute-0 sshd-session[30802]: Unable to negotiate with 192.168.122.11 port 37482: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 05 13:58:37 compute-0 sshd-session[30804]: Unable to negotiate with 192.168.122.11 port 37490: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 05 13:59:41 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 05 13:59:41 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 05 13:59:41 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 05 13:59:41 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 05 14:00:34 compute-0 sshd-session[30813]: Connection closed by 101.126.13.40 port 44626
Jan 05 14:00:43 compute-0 sshd-session[30814]: Connection closed by authenticating user root 101.126.13.40 port 44640 [preauth]
Jan 05 14:01:01 compute-0 CROND[30817]: (root) CMD (run-parts /etc/cron.hourly)
Jan 05 14:01:01 compute-0 run-parts[30820]: (/etc/cron.hourly) starting 0anacron
Jan 05 14:01:01 compute-0 anacron[30828]: Anacron started on 2026-01-05
Jan 05 14:01:01 compute-0 anacron[30828]: Will run job `cron.daily' in 31 min.
Jan 05 14:01:01 compute-0 anacron[30828]: Will run job `cron.weekly' in 51 min.
Jan 05 14:01:01 compute-0 anacron[30828]: Will run job `cron.monthly' in 71 min.
Jan 05 14:01:01 compute-0 anacron[30828]: Jobs will be executed sequentially
Jan 05 14:01:01 compute-0 run-parts[30830]: (/etc/cron.hourly) finished 0anacron
Jan 05 14:01:01 compute-0 CROND[30816]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 05 14:01:36 compute-0 python3[30855]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:04:16 compute-0 sshd-session[30858]: Connection closed by 103.203.57.11 port 49212
Jan 05 14:06:36 compute-0 sshd-session[29919]: Received disconnect from 38.102.83.65 port 58342:11: disconnected by user
Jan 05 14:06:36 compute-0 sshd-session[29919]: Disconnected from user zuul 38.102.83.65 port 58342
Jan 05 14:06:36 compute-0 sshd-session[29916]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:06:36 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 05 14:06:36 compute-0 systemd[1]: session-7.scope: Consumed 5.803s CPU time.
Jan 05 14:06:36 compute-0 systemd-logind[795]: Session 7 logged out. Waiting for processes to exit.
Jan 05 14:06:36 compute-0 systemd-logind[795]: Removed session 7.
Jan 05 14:08:29 compute-0 sshd-session[30861]: Received disconnect from 80.94.93.119 port 50454:11:  [preauth]
Jan 05 14:08:29 compute-0 sshd-session[30861]: Disconnected from authenticating user root 80.94.93.119 port 50454 [preauth]
Jan 05 14:12:23 compute-0 sshd-session[30865]: Received disconnect from 193.46.255.33 port 45828:11:  [preauth]
Jan 05 14:12:23 compute-0 sshd-session[30865]: Disconnected from authenticating user root 193.46.255.33 port 45828 [preauth]
Jan 05 14:14:21 compute-0 sshd-session[30867]: Accepted publickey for zuul from 192.168.122.30 port 43198 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:14:21 compute-0 systemd-logind[795]: New session 8 of user zuul.
Jan 05 14:14:21 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 05 14:14:21 compute-0 sshd-session[30867]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:14:22 compute-0 python3.9[31020]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:14:23 compute-0 sudo[31199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqsnyhevswkisspstbipibdcrppyzlzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622463.037225-32-164214484120147/AnsiballZ_command.py'
Jan 05 14:14:23 compute-0 sudo[31199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:14:23 compute-0 python3.9[31201]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:14:30 compute-0 sudo[31199]: pam_unix(sudo:session): session closed for user root
Jan 05 14:14:31 compute-0 sshd-session[30870]: Connection closed by 192.168.122.30 port 43198
Jan 05 14:14:31 compute-0 sshd-session[30867]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:14:31 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 05 14:14:31 compute-0 systemd[1]: session-8.scope: Consumed 8.155s CPU time.
Jan 05 14:14:31 compute-0 systemd-logind[795]: Session 8 logged out. Waiting for processes to exit.
Jan 05 14:14:31 compute-0 systemd-logind[795]: Removed session 8.
Jan 05 14:14:38 compute-0 sshd-session[31258]: Accepted publickey for zuul from 192.168.122.30 port 60304 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:14:38 compute-0 systemd-logind[795]: New session 9 of user zuul.
Jan 05 14:14:38 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 05 14:14:38 compute-0 sshd-session[31258]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:14:39 compute-0 python3.9[31411]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:14:40 compute-0 sshd-session[31261]: Connection closed by 192.168.122.30 port 60304
Jan 05 14:14:40 compute-0 sshd-session[31258]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:14:40 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 05 14:14:40 compute-0 systemd-logind[795]: Session 9 logged out. Waiting for processes to exit.
Jan 05 14:14:40 compute-0 systemd-logind[795]: Removed session 9.
Jan 05 14:14:57 compute-0 sshd-session[31440]: Accepted publickey for zuul from 192.168.122.30 port 37456 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:14:57 compute-0 systemd-logind[795]: New session 10 of user zuul.
Jan 05 14:14:57 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 05 14:14:57 compute-0 sshd-session[31440]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:14:58 compute-0 python3.9[31593]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 05 14:14:59 compute-0 python3.9[31767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:15:00 compute-0 sudo[31917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcbiumprkwgmczglfbpszfznvcbhneoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622499.6160069-45-231900512821132/AnsiballZ_command.py'
Jan 05 14:15:00 compute-0 sudo[31917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:15:00 compute-0 python3.9[31919]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:15:00 compute-0 sudo[31917]: pam_unix(sudo:session): session closed for user root
Jan 05 14:15:01 compute-0 sudo[32070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnodmvjnyjwpaycenxrzsmrblzofujfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622500.6108024-57-100731819495915/AnsiballZ_stat.py'
Jan 05 14:15:01 compute-0 sudo[32070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:15:01 compute-0 python3.9[32072]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:15:01 compute-0 sudo[32070]: pam_unix(sudo:session): session closed for user root
Jan 05 14:15:01 compute-0 sudo[32222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgmaoxflldxtubswnruqnnluskprdgpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622501.4775264-65-207088835247666/AnsiballZ_file.py'
Jan 05 14:15:01 compute-0 sudo[32222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:15:02 compute-0 python3.9[32224]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:15:02 compute-0 sudo[32222]: pam_unix(sudo:session): session closed for user root
Jan 05 14:15:02 compute-0 sudo[32374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfuiqeksvojoimeyyvzqqdyeuodwvgxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622502.4801955-73-104293513452612/AnsiballZ_stat.py'
Jan 05 14:15:02 compute-0 sudo[32374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:15:03 compute-0 python3.9[32376]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:15:03 compute-0 sudo[32374]: pam_unix(sudo:session): session closed for user root
Jan 05 14:15:03 compute-0 sudo[32497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkopmnhguxjvnmuisxnyibintlwwokrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622502.4801955-73-104293513452612/AnsiballZ_copy.py'
Jan 05 14:15:03 compute-0 sudo[32497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:15:03 compute-0 python3.9[32499]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1767622502.4801955-73-104293513452612/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:15:03 compute-0 sudo[32497]: pam_unix(sudo:session): session closed for user root
Jan 05 14:15:04 compute-0 sudo[32649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjxwfhheoaedfuezcgnrmjqbmwnpvfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622504.068201-88-264750894351276/AnsiballZ_setup.py'
Jan 05 14:15:04 compute-0 sudo[32649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:15:04 compute-0 python3.9[32651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:15:04 compute-0 sudo[32649]: pam_unix(sudo:session): session closed for user root
Jan 05 14:15:05 compute-0 sudo[32805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxvnhlrdwnsdwzxconoznareudlbzhlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622505.0958667-96-145433625394811/AnsiballZ_file.py'
Jan 05 14:15:05 compute-0 sudo[32805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:15:05 compute-0 python3.9[32807]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:15:05 compute-0 sudo[32805]: pam_unix(sudo:session): session closed for user root
Jan 05 14:15:06 compute-0 sudo[32957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzxrdsuruwdvnezpdalfdxlyymdiaxjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622505.9646773-105-203567770677444/AnsiballZ_file.py'
Jan 05 14:15:06 compute-0 sudo[32957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:15:06 compute-0 python3.9[32959]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:15:06 compute-0 sudo[32957]: pam_unix(sudo:session): session closed for user root
Jan 05 14:15:07 compute-0 python3.9[33109]: ansible-ansible.builtin.service_facts Invoked
Jan 05 14:15:12 compute-0 python3.9[33362]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:15:13 compute-0 python3.9[33512]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:15:14 compute-0 python3.9[33666]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:15:15 compute-0 sudo[33822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phichgzuwcbzephfmuesbopnfoaaagqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622515.380805-153-166619780112251/AnsiballZ_setup.py'
Jan 05 14:15:15 compute-0 sudo[33822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:15:16 compute-0 python3.9[33824]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:15:16 compute-0 sudo[33822]: pam_unix(sudo:session): session closed for user root
Jan 05 14:15:16 compute-0 sudo[33906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-marwnzfqwqzpwhmtkpyfqkmqgnyvhkke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622515.380805-153-166619780112251/AnsiballZ_dnf.py'
Jan 05 14:15:16 compute-0 sudo[33906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:15:16 compute-0 python3.9[33908]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:16:01 compute-0 systemd[1]: Reloading.
Jan 05 14:16:01 compute-0 systemd-rc-local-generator[34106]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:16:01 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 05 14:16:02 compute-0 systemd[1]: Reloading.
Jan 05 14:16:02 compute-0 systemd-rc-local-generator[34143]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:16:02 compute-0 systemd[1]: Starting dnf makecache...
Jan 05 14:16:02 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 05 14:16:02 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 05 14:16:02 compute-0 systemd[1]: Reloading.
Jan 05 14:16:02 compute-0 dnf[34158]: Failed determining last makecache time.
Jan 05 14:16:02 compute-0 systemd-rc-local-generator[34190]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-openstack-barbican-42b4c41831408a8e323 134 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 153 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-openstack-cinder-1c00d6490d88e436f26ef 141 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-python-stevedore-c4acc5639fd2329372142 188 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-python-cloudkitty-tests-tempest-2c80f8 166 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-os-refresh-config-9bfc52b5049be2d8de61 178 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 148 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-python-designate-tests-tempest-347fdbc 107 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-openstack-glance-1fd12c29b339f30fe823e 136 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 165 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-openstack-manila-3c01b7181572c95dac462 159 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-python-whitebox-neutron-tests-tempest- 166 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-openstack-octavia-ba397f07a7331190208c 160 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-openstack-watcher-c014f81a8647287f6dcc 172 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-ansible-config_template-5ccaa22121a7ff 171 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Jan 05 14:16:02 compute-0 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 173 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-openstack-swift-dc98a8463506ac520c469a 158 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-python-tempestconf-8515371b7cceebd4282 159 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: delorean-openstack-heat-ui-013accbfd179753bc3f0 152 kB/s | 3.0 kB     00:00
Jan 05 14:16:02 compute-0 dnf[34158]: CentOS Stream 9 - BaseOS                         62 kB/s | 6.7 kB     00:00
Jan 05 14:16:03 compute-0 dnf[34158]: CentOS Stream 9 - AppStream                      65 kB/s | 6.8 kB     00:00
Jan 05 14:16:03 compute-0 dnf[34158]: CentOS Stream 9 - CRB                            26 kB/s | 6.6 kB     00:00
Jan 05 14:16:03 compute-0 dnf[34158]: CentOS Stream 9 - Extras packages                73 kB/s | 7.3 kB     00:00
Jan 05 14:16:03 compute-0 dnf[34158]: dlrn-antelope-testing                           157 kB/s | 3.0 kB     00:00
Jan 05 14:16:03 compute-0 dnf[34158]: dlrn-antelope-build-deps                        164 kB/s | 3.0 kB     00:00
Jan 05 14:16:03 compute-0 dnf[34158]: centos9-rabbitmq                                134 kB/s | 3.0 kB     00:00
Jan 05 14:16:03 compute-0 dnf[34158]: centos9-storage                                 134 kB/s | 3.0 kB     00:00
Jan 05 14:16:03 compute-0 dnf[34158]: centos9-opstools                                113 kB/s | 3.0 kB     00:00
Jan 05 14:16:03 compute-0 dnf[34158]: NFV SIG OpenvSwitch                             127 kB/s | 3.0 kB     00:00
Jan 05 14:16:03 compute-0 dnf[34158]: repo-setup-centos-appstream                     187 kB/s | 4.4 kB     00:00
Jan 05 14:16:04 compute-0 dnf[34158]: repo-setup-centos-baseos                        166 kB/s | 3.9 kB     00:00
Jan 05 14:16:04 compute-0 dnf[34158]: repo-setup-centos-highavailability               50 kB/s | 3.9 kB     00:00
Jan 05 14:16:04 compute-0 dnf[34158]: repo-setup-centos-powertools                    195 kB/s | 4.3 kB     00:00
Jan 05 14:16:04 compute-0 dnf[34158]: Extra Packages for Enterprise Linux 9 - x86_64  105 kB/s |  31 kB     00:00
Jan 05 14:16:05 compute-0 dnf[34158]: Metadata cache created.
Jan 05 14:16:05 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 05 14:16:05 compute-0 systemd[1]: Finished dnf makecache.
Jan 05 14:16:05 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.916s CPU time.
Jan 05 14:17:05 compute-0 kernel: SELinux:  Converting 2716 SID table entries...
Jan 05 14:17:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 14:17:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 14:17:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 14:17:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 14:17:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 14:17:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 14:17:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 14:17:05 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 05 14:17:06 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 14:17:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 14:17:06 compute-0 systemd[1]: Reloading.
Jan 05 14:17:06 compute-0 systemd-rc-local-generator[34550]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:17:06 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 14:17:06 compute-0 sudo[33906]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:07 compute-0 sudo[35460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juxmhohyryaalmdsxkfawouusceedycw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622627.0621557-165-63721599769197/AnsiballZ_command.py'
Jan 05 14:17:07 compute-0 sudo[35460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 14:17:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 14:17:07 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.431s CPU time.
Jan 05 14:17:07 compute-0 systemd[1]: run-r6d45d52006a44c9dbb692f44e615d28b.service: Deactivated successfully.
Jan 05 14:17:07 compute-0 python3.9[35462]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:17:08 compute-0 sudo[35460]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:09 compute-0 sudo[35742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yozthclexvfgsgospvifpwagzunysdvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622628.6904106-173-57452134615660/AnsiballZ_selinux.py'
Jan 05 14:17:09 compute-0 sudo[35742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:09 compute-0 python3.9[35744]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 05 14:17:09 compute-0 sudo[35742]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:10 compute-0 sudo[35894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hykfqjlqapuodgywyscgbsphgfhyvrra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622630.1071947-184-44879154088126/AnsiballZ_command.py'
Jan 05 14:17:10 compute-0 sudo[35894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:10 compute-0 python3.9[35896]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 05 14:17:11 compute-0 sudo[35894]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:12 compute-0 sudo[36047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hewhrdevmsulwmrjtnrlpfxuchlevegg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622632.039481-192-41035726268528/AnsiballZ_file.py'
Jan 05 14:17:12 compute-0 sudo[36047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:14 compute-0 python3.9[36049]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:17:14 compute-0 sudo[36047]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:15 compute-0 sudo[36199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzegdmvmibepgwldfqryzcmsasmyvmcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622634.6973968-200-153882214775017/AnsiballZ_mount.py'
Jan 05 14:17:15 compute-0 sudo[36199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:15 compute-0 python3.9[36201]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 05 14:17:15 compute-0 sudo[36199]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:16 compute-0 sudo[36351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlxuvqdzvilszkwkqavrajujlnjkssst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622636.1783895-228-261666337447591/AnsiballZ_file.py'
Jan 05 14:17:16 compute-0 sudo[36351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:16 compute-0 python3.9[36353]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:17:16 compute-0 sudo[36351]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:17 compute-0 sudo[36503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvqjzhctoxscwfqgihuofkgelztehwem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622636.8167787-236-192120623218856/AnsiballZ_stat.py'
Jan 05 14:17:17 compute-0 sudo[36503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:20 compute-0 python3.9[36505]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:17:20 compute-0 irqbalance[782]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 05 14:17:20 compute-0 irqbalance[782]: IRQ 26 affinity is now unmanaged
Jan 05 14:17:20 compute-0 sudo[36503]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:20 compute-0 sudo[36626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imcjmtoacxeyhxqwwfpwqyhjepqmdqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622636.8167787-236-192120623218856/AnsiballZ_copy.py'
Jan 05 14:17:20 compute-0 sudo[36626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:22 compute-0 python3.9[36628]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767622636.8167787-236-192120623218856/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fb35b0cceb6bbb3806e5a7af9cadd640cd52197d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:17:22 compute-0 sudo[36626]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:23 compute-0 sudo[36778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhenqvfosgnmymxxxvmoxmzzvgugnjwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622643.2216496-260-134630279202604/AnsiballZ_stat.py'
Jan 05 14:17:23 compute-0 sudo[36778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:23 compute-0 python3.9[36780]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:17:23 compute-0 sudo[36778]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:24 compute-0 sudo[36930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jczzgyneyaslxzrdwsszdhavbnpyqmtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622643.953568-268-122665053601319/AnsiballZ_command.py'
Jan 05 14:17:24 compute-0 sudo[36930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:25 compute-0 python3.9[36932]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:17:25 compute-0 sudo[36930]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:25 compute-0 sudo[37083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viazimsfuvbpcruyxhekqnvojmdylzbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622645.4030366-276-124098726589190/AnsiballZ_file.py'
Jan 05 14:17:25 compute-0 sudo[37083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:25 compute-0 python3.9[37085]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:17:25 compute-0 sudo[37083]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:26 compute-0 sudo[37236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrrukdcszgcsyivaolzfwlmgustvrzvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622646.2621531-287-176696334683896/AnsiballZ_getent.py'
Jan 05 14:17:26 compute-0 sudo[37236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:26 compute-0 python3.9[37238]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 05 14:17:26 compute-0 sudo[37236]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:27 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 14:17:27 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 14:17:27 compute-0 sudo[37390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omnpmnrcbbtztrofbozogrrpkxkoqbuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622647.1837003-295-210480995551879/AnsiballZ_group.py'
Jan 05 14:17:27 compute-0 sudo[37390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:27 compute-0 python3.9[37392]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 14:17:27 compute-0 groupadd[37393]: group added to /etc/group: name=qemu, GID=107
Jan 05 14:17:27 compute-0 groupadd[37393]: group added to /etc/gshadow: name=qemu
Jan 05 14:17:27 compute-0 groupadd[37393]: new group: name=qemu, GID=107
Jan 05 14:17:27 compute-0 sudo[37390]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:28 compute-0 sudo[37548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slglegosaeifrovbvvwnldevffirmphn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622648.2173426-303-62643672913756/AnsiballZ_user.py'
Jan 05 14:17:28 compute-0 sudo[37548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:28 compute-0 python3.9[37550]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 05 14:17:28 compute-0 useradd[37552]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 05 14:17:29 compute-0 sudo[37548]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:29 compute-0 sudo[37708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aykjfbsavpekgncnkcxiefbjlkzcqgzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622649.2848217-311-78838413723708/AnsiballZ_getent.py'
Jan 05 14:17:29 compute-0 sudo[37708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:29 compute-0 python3.9[37710]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 05 14:17:29 compute-0 sudo[37708]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:30 compute-0 sudo[37861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iakhgmaxhywzyyxfqskvupslbmyfxcwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622650.0110269-319-132480167900876/AnsiballZ_group.py'
Jan 05 14:17:30 compute-0 sudo[37861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:30 compute-0 python3.9[37863]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 14:17:30 compute-0 groupadd[37864]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 05 14:17:30 compute-0 groupadd[37864]: group added to /etc/gshadow: name=hugetlbfs
Jan 05 14:17:30 compute-0 groupadd[37864]: new group: name=hugetlbfs, GID=42477
Jan 05 14:17:30 compute-0 sudo[37861]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:31 compute-0 sudo[38019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uewsmyofiqxgozhjfzlgwnuyaxhwyjzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622650.8975246-328-23835272461025/AnsiballZ_file.py'
Jan 05 14:17:31 compute-0 sudo[38019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:31 compute-0 python3.9[38021]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 05 14:17:31 compute-0 sudo[38019]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:32 compute-0 sudo[38171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxvoreoxobrocpcblghgxawwxopojjhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622651.7909002-339-176636327418475/AnsiballZ_dnf.py'
Jan 05 14:17:32 compute-0 sudo[38171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:32 compute-0 python3.9[38173]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:17:34 compute-0 sudo[38171]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:34 compute-0 sudo[38324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnkfpstdpbgrnkvrbpdkpxdntjgndudl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622654.346112-347-52841125471817/AnsiballZ_file.py'
Jan 05 14:17:34 compute-0 sudo[38324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:34 compute-0 python3.9[38326]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:17:34 compute-0 sudo[38324]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:35 compute-0 sudo[38476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgefkvhyogrymbfzldpupynjivwzgqjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622655.0510793-355-265864366168945/AnsiballZ_stat.py'
Jan 05 14:17:35 compute-0 sudo[38476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:35 compute-0 python3.9[38478]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:17:35 compute-0 sudo[38476]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:36 compute-0 sudo[38599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdopvxuvaautuxadlzdohsbglwjfgnxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622655.0510793-355-265864366168945/AnsiballZ_copy.py'
Jan 05 14:17:36 compute-0 sudo[38599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:36 compute-0 python3.9[38601]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767622655.0510793-355-265864366168945/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:17:36 compute-0 sudo[38599]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:37 compute-0 sudo[38751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzvqdwsdfjupevsqmvtpplvzmzpzcpxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622656.4385588-370-193195975048634/AnsiballZ_systemd.py'
Jan 05 14:17:37 compute-0 sudo[38751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:37 compute-0 python3.9[38753]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:17:37 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 05 14:17:37 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 05 14:17:37 compute-0 kernel: Bridge firewalling registered
Jan 05 14:17:37 compute-0 systemd-modules-load[38757]: Inserted module 'br_netfilter'
Jan 05 14:17:37 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 05 14:17:37 compute-0 sudo[38751]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:38 compute-0 sudo[38911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aitljdwqnzfdznlhfmjshfbgextmkmgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622657.7352657-378-36799427792276/AnsiballZ_stat.py'
Jan 05 14:17:38 compute-0 sudo[38911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:38 compute-0 python3.9[38913]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:17:38 compute-0 sudo[38911]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:38 compute-0 sudo[39034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnmckfnqyntlkcmilzhrmlbrqidrixir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622657.7352657-378-36799427792276/AnsiballZ_copy.py'
Jan 05 14:17:38 compute-0 sudo[39034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:38 compute-0 python3.9[39036]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767622657.7352657-378-36799427792276/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:17:38 compute-0 sudo[39034]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:39 compute-0 sudo[39186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbhjwhsrnqjhuamofexykfdqzlaguptw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622659.2583392-396-102851027257843/AnsiballZ_dnf.py'
Jan 05 14:17:39 compute-0 sudo[39186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:39 compute-0 python3.9[39188]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:17:43 compute-0 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Jan 05 14:17:43 compute-0 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Jan 05 14:17:44 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 14:17:44 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 14:17:44 compute-0 systemd[1]: Reloading.
Jan 05 14:17:44 compute-0 systemd-rc-local-generator[39245]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:17:44 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 14:17:45 compute-0 sudo[39186]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:46 compute-0 python3.9[40392]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:17:47 compute-0 python3.9[41272]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 05 14:17:48 compute-0 python3.9[41972]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:17:48 compute-0 sudo[42827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhbjgcmazjesojtwdghzumbrhwyoulio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622668.4403808-435-19288912762583/AnsiballZ_command.py'
Jan 05 14:17:48 compute-0 sudo[42827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:48 compute-0 python3.9[42845]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:17:48 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 05 14:17:49 compute-0 systemd[1]: Starting Authorization Manager...
Jan 05 14:17:49 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 05 14:17:49 compute-0 polkitd[43557]: Started polkitd version 0.117
Jan 05 14:17:49 compute-0 polkitd[43557]: Loading rules from directory /etc/polkit-1/rules.d
Jan 05 14:17:49 compute-0 polkitd[43557]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 05 14:17:49 compute-0 polkitd[43557]: Finished loading, compiling and executing 2 rules
Jan 05 14:17:49 compute-0 systemd[1]: Started Authorization Manager.
Jan 05 14:17:49 compute-0 polkitd[43557]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 05 14:17:49 compute-0 sudo[42827]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 14:17:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 14:17:49 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.557s CPU time.
Jan 05 14:17:49 compute-0 systemd[1]: run-rdcc18dcf4d2f4c67b8b2dd10c034799c.service: Deactivated successfully.
Jan 05 14:17:50 compute-0 sudo[43745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isbpugkcbtzloogubvzrfazluueqspih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622669.7309196-444-259657723644916/AnsiballZ_systemd.py'
Jan 05 14:17:50 compute-0 sudo[43745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:50 compute-0 python3.9[43747]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:17:50 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 05 14:17:50 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 05 14:17:50 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 05 14:17:50 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 05 14:17:50 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 05 14:17:50 compute-0 sudo[43745]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:51 compute-0 python3.9[43908]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 05 14:17:53 compute-0 sudo[44058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csofwgqlpprldibgkqsitwulawzayciw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622673.294087-501-226064227123195/AnsiballZ_systemd.py'
Jan 05 14:17:53 compute-0 sudo[44058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:53 compute-0 python3.9[44060]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:17:54 compute-0 systemd[1]: Reloading.
Jan 05 14:17:54 compute-0 systemd-rc-local-generator[44090]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:17:54 compute-0 sudo[44058]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:54 compute-0 sudo[44247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkgujlcokhwvixjzmrroqekkuekowkuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622674.4550152-501-50833680735736/AnsiballZ_systemd.py'
Jan 05 14:17:54 compute-0 sudo[44247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:55 compute-0 python3.9[44249]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:17:55 compute-0 systemd[1]: Reloading.
Jan 05 14:17:55 compute-0 systemd-rc-local-generator[44279]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:17:55 compute-0 sudo[44247]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:56 compute-0 sudo[44436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzkvqzqdkdlargafxqhtepwqxevdgjoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622675.7189453-517-141248285960789/AnsiballZ_command.py'
Jan 05 14:17:56 compute-0 sudo[44436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:56 compute-0 python3.9[44438]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:17:56 compute-0 sudo[44436]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:56 compute-0 sudo[44589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evdzcggbgcibpdxbahqquyyrhmnrhevz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622676.4378097-525-101116828598021/AnsiballZ_command.py'
Jan 05 14:17:56 compute-0 sudo[44589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:56 compute-0 python3.9[44591]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:17:56 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 05 14:17:56 compute-0 sudo[44589]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:57 compute-0 sudo[44742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkygzqccjtddeuciiwodgtoadfzmyzwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622677.206503-533-267818877424805/AnsiballZ_command.py'
Jan 05 14:17:57 compute-0 sudo[44742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:17:57 compute-0 python3.9[44744]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:17:59 compute-0 sudo[44742]: pam_unix(sudo:session): session closed for user root
Jan 05 14:17:59 compute-0 sudo[44904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlkkbcipaidwlrlvahxzciyawnvpivbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622679.481609-541-36939154360300/AnsiballZ_command.py'
Jan 05 14:17:59 compute-0 sudo[44904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:00 compute-0 python3.9[44906]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:18:00 compute-0 sudo[44904]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:00 compute-0 sudo[45057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcsqhlcxfqqorzsryjcljrkqzjmkszok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622680.1920378-549-268916243078949/AnsiballZ_systemd.py'
Jan 05 14:18:00 compute-0 sudo[45057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:00 compute-0 python3.9[45059]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:18:00 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 05 14:18:00 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 05 14:18:00 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 05 14:18:00 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 05 14:18:00 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 05 14:18:00 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 05 14:18:00 compute-0 sudo[45057]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:01 compute-0 sshd-session[31443]: Connection closed by 192.168.122.30 port 37456
Jan 05 14:18:01 compute-0 sshd-session[31440]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:18:01 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 05 14:18:01 compute-0 systemd[1]: session-10.scope: Consumed 2min 18.704s CPU time.
Jan 05 14:18:01 compute-0 systemd-logind[795]: Session 10 logged out. Waiting for processes to exit.
Jan 05 14:18:01 compute-0 systemd-logind[795]: Removed session 10.
Jan 05 14:18:06 compute-0 sshd-session[45089]: Accepted publickey for zuul from 192.168.122.30 port 53356 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:18:06 compute-0 systemd-logind[795]: New session 11 of user zuul.
Jan 05 14:18:06 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 05 14:18:06 compute-0 sshd-session[45089]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:18:08 compute-0 python3.9[45242]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:18:09 compute-0 python3.9[45396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:18:10 compute-0 sudo[45550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuqmcndtovlmmiurdtvnaggbvayeaiai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622689.9547021-50-85802722169773/AnsiballZ_command.py'
Jan 05 14:18:10 compute-0 sudo[45550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:10 compute-0 python3.9[45552]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:18:10 compute-0 sudo[45550]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:11 compute-0 python3.9[45703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:18:12 compute-0 sudo[45857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fynfpiznlgwoxatchzraukvlcicpoghd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622692.1383538-70-208157928218213/AnsiballZ_setup.py'
Jan 05 14:18:12 compute-0 sudo[45857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:12 compute-0 python3.9[45859]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:18:13 compute-0 sudo[45857]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:13 compute-0 sudo[45941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahcbcmdaytpvjwogdnlnykjmigugkumh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622692.1383538-70-208157928218213/AnsiballZ_dnf.py'
Jan 05 14:18:13 compute-0 sudo[45941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:13 compute-0 python3.9[45943]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:18:15 compute-0 sudo[45941]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:16 compute-0 sudo[46094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaqzsxrokebdmipxzvxlfxyxwghfgiky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622695.3040924-82-80457547384518/AnsiballZ_setup.py'
Jan 05 14:18:16 compute-0 sudo[46094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:16 compute-0 python3.9[46096]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:18:16 compute-0 sudo[46094]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:17 compute-0 sudo[46265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtjaowvareizrseujmdjiqsqizlaryvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622696.8763309-93-189872567767464/AnsiballZ_file.py'
Jan 05 14:18:17 compute-0 sudo[46265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:17 compute-0 python3.9[46267]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:18:17 compute-0 sudo[46265]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:18 compute-0 sudo[46417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quqibxnuliwuteejnimenhcjrtrotyob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622697.9798274-101-187325431478926/AnsiballZ_command.py'
Jan 05 14:18:18 compute-0 sudo[46417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:18 compute-0 python3.9[46419]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:18:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1178665571-merged.mount: Deactivated successfully.
Jan 05 14:18:18 compute-0 podman[46420]: 2026-01-05 14:18:18.871611535 +0000 UTC m=+0.288794894 system refresh
Jan 05 14:18:18 compute-0 sudo[46417]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:18:19 compute-0 sudo[46581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obupjiewvgaeosuswtxxcmdqpbkzazzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622699.3363304-109-144146539375252/AnsiballZ_stat.py'
Jan 05 14:18:19 compute-0 sudo[46581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:20 compute-0 python3.9[46583]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:18:20 compute-0 sudo[46581]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:20 compute-0 sudo[46704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znmmqvtvkhpubttkrtztpwioneagbhiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622699.3363304-109-144146539375252/AnsiballZ_copy.py'
Jan 05 14:18:20 compute-0 sudo[46704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:20 compute-0 python3.9[46706]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767622699.3363304-109-144146539375252/.source.json follow=False _original_basename=podman_network_config.j2 checksum=20bd2301a7edde0323522dda0d60974606fc86f1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:18:20 compute-0 sudo[46704]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:21 compute-0 sudo[46856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kclynowlrnyokmdgpmzsnpuzxaxilimn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622701.138059-124-264966846325159/AnsiballZ_stat.py'
Jan 05 14:18:21 compute-0 sudo[46856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:21 compute-0 python3.9[46858]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:18:21 compute-0 sudo[46856]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:21 compute-0 sudo[46979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utvkyahjnahmtfquyzgzsdxrofxcscsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622701.138059-124-264966846325159/AnsiballZ_copy.py'
Jan 05 14:18:22 compute-0 sudo[46979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:22 compute-0 python3.9[46981]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767622701.138059-124-264966846325159/.source.conf follow=False _original_basename=registries.conf.j2 checksum=5248920f79a1cb67b3ef013f523e4500b06a731f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:18:22 compute-0 sudo[46979]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:22 compute-0 sudo[47131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzxealnhmddmifltngxxlziypqitgjmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622702.4033594-140-54286220220761/AnsiballZ_ini_file.py'
Jan 05 14:18:22 compute-0 sudo[47131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:23 compute-0 python3.9[47133]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:18:23 compute-0 sudo[47131]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:23 compute-0 sudo[47283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yejbjsgoenvdkwtvprlodybcexmxlgaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622703.2817502-140-274269420796704/AnsiballZ_ini_file.py'
Jan 05 14:18:23 compute-0 sudo[47283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:23 compute-0 python3.9[47285]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:18:23 compute-0 sudo[47283]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:24 compute-0 sudo[47435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxgagcaihfsnqxrkkewjqglclscbbagc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622703.9495168-140-118100065359449/AnsiballZ_ini_file.py'
Jan 05 14:18:24 compute-0 sudo[47435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:24 compute-0 python3.9[47437]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:18:24 compute-0 sudo[47435]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:25 compute-0 sudo[47587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxvrcuqrnnyhgthmhwglyjphzasftyrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622704.7130404-140-114657580199741/AnsiballZ_ini_file.py'
Jan 05 14:18:25 compute-0 sudo[47587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:25 compute-0 python3.9[47589]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:18:25 compute-0 sudo[47587]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:26 compute-0 python3.9[47739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:18:26 compute-0 sudo[47891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbucbifqmdslnetwqewapigbeodhaaln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622706.355114-180-259278456833021/AnsiballZ_dnf.py'
Jan 05 14:18:26 compute-0 sudo[47891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:26 compute-0 python3.9[47893]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:18:28 compute-0 sudo[47891]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:28 compute-0 sudo[48044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-manacomyuboipuaavccmcnqnuzoqdwbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622708.406964-188-276224950027262/AnsiballZ_dnf.py'
Jan 05 14:18:28 compute-0 sudo[48044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:29 compute-0 python3.9[48046]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:18:30 compute-0 sudo[48044]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:31 compute-0 sudo[48204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puqpmslqclnetvbjmnyiiusyvnffdyqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622710.8989413-198-56755571319749/AnsiballZ_dnf.py'
Jan 05 14:18:31 compute-0 sudo[48204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:31 compute-0 python3.9[48206]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:18:32 compute-0 sudo[48204]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:33 compute-0 sudo[48357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpnbqwbilztfqnxlkvyowyzrkevewikm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622712.936921-207-193149444290554/AnsiballZ_dnf.py'
Jan 05 14:18:33 compute-0 sudo[48357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:33 compute-0 python3.9[48359]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:18:34 compute-0 sudo[48357]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:35 compute-0 sudo[48510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpymwljxjmtxopcogigbwejxlwllkmwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622715.2503183-218-246761882524392/AnsiballZ_dnf.py'
Jan 05 14:18:35 compute-0 sudo[48510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:35 compute-0 python3.9[48512]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:18:37 compute-0 sudo[48510]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:37 compute-0 sudo[48666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzelicausezrpojkiwhiujnraienfdlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622717.5634806-226-253008239398810/AnsiballZ_dnf.py'
Jan 05 14:18:37 compute-0 sudo[48666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:38 compute-0 python3.9[48668]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:18:40 compute-0 sudo[48666]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:41 compute-0 sudo[48835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osavzcolpysiqfvvytxzbtsgmoawuhdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622720.8014252-235-249860601675872/AnsiballZ_dnf.py'
Jan 05 14:18:41 compute-0 sudo[48835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:41 compute-0 python3.9[48837]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:18:42 compute-0 sudo[48835]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:43 compute-0 sudo[48988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adfhbtdhuctyxlpyjflbqvbruecjdpqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622722.9618893-244-168763343209864/AnsiballZ_dnf.py'
Jan 05 14:18:43 compute-0 sudo[48988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:43 compute-0 python3.9[48990]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:18:57 compute-0 sudo[48988]: pam_unix(sudo:session): session closed for user root
Jan 05 14:18:58 compute-0 sudo[49326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzqjgivufuegbkxljfhwucxkajnwajsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622737.892719-253-252417932886633/AnsiballZ_dnf.py'
Jan 05 14:18:58 compute-0 sudo[49326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:18:58 compute-0 python3.9[49328]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:18:59 compute-0 sudo[49326]: pam_unix(sudo:session): session closed for user root
Jan 05 14:19:00 compute-0 sudo[49482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tafebgpobvlbkangpavjnbkjubkbwuwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622740.1026156-263-63649575351344/AnsiballZ_dnf.py'
Jan 05 14:19:00 compute-0 sudo[49482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:19:00 compute-0 python3.9[49484]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:19:02 compute-0 sudo[49482]: pam_unix(sudo:session): session closed for user root
Jan 05 14:19:02 compute-0 sudo[49639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkknylpndcmzkosafpomshehsylbsmrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622742.5689936-274-149668153583254/AnsiballZ_file.py'
Jan 05 14:19:02 compute-0 sudo[49639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:19:03 compute-0 python3.9[49641]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:19:03 compute-0 sudo[49639]: pam_unix(sudo:session): session closed for user root
Jan 05 14:19:03 compute-0 sudo[49814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acgvlyifcdpakflbsrwlkrlaqqnnikmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622743.3256829-282-103451628653453/AnsiballZ_stat.py'
Jan 05 14:19:03 compute-0 sudo[49814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:19:03 compute-0 python3.9[49816]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:19:03 compute-0 sudo[49814]: pam_unix(sudo:session): session closed for user root
Jan 05 14:19:04 compute-0 sudo[49937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rchlqupdskoivzccquwqrcjolueigrxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622743.3256829-282-103451628653453/AnsiballZ_copy.py'
Jan 05 14:19:04 compute-0 sudo[49937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:19:04 compute-0 python3.9[49939]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1767622743.3256829-282-103451628653453/.source.json _original_basename=.qddf1ovm follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:19:04 compute-0 sudo[49937]: pam_unix(sudo:session): session closed for user root
Jan 05 14:19:05 compute-0 sudo[50089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrbqnzsyeerebjmymbgkxufgqpzygcqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622744.9981103-300-100078613343292/AnsiballZ_podman_image.py'
Jan 05 14:19:05 compute-0 sudo[50089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:19:05 compute-0 python3.9[50091]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 14:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3658673000-lower\x2dmapped.mount: Deactivated successfully.
Jan 05 14:19:11 compute-0 podman[50103]: 2026-01-05 14:19:11.583283302 +0000 UTC m=+5.743221537 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 05 14:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:11 compute-0 sudo[50089]: pam_unix(sudo:session): session closed for user root
Jan 05 14:19:12 compute-0 sudo[50401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbsktxegkmtphpxjucgbhalmgwsxecso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622752.251278-311-60681350888457/AnsiballZ_podman_image.py'
Jan 05 14:19:12 compute-0 sudo[50401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:19:12 compute-0 python3.9[50403]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 14:19:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:24 compute-0 podman[50415]: 2026-01-05 14:19:24.038941886 +0000 UTC m=+11.063882299 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 14:19:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:24 compute-0 sudo[50401]: pam_unix(sudo:session): session closed for user root
Jan 05 14:19:24 compute-0 sudo[50709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drubqusucisfcnhmlxwoeghvvkuplvpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622764.6103315-321-75130481533483/AnsiballZ_podman_image.py'
Jan 05 14:19:24 compute-0 sudo[50709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:19:25 compute-0 python3.9[50711]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 14:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:42 compute-0 podman[50723]: 2026-01-05 14:19:42.714824759 +0000 UTC m=+17.480538748 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 05 14:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:19:42 compute-0 sudo[50709]: pam_unix(sudo:session): session closed for user root
Jan 05 14:19:43 compute-0 sudo[50991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlfmhkljjkllptxbyghajnxpmmbzjfrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622783.3803627-332-138199240600028/AnsiballZ_podman_image.py'
Jan 05 14:19:43 compute-0 sudo[50991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:19:43 compute-0 python3.9[50993]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 14:19:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:02 compute-0 podman[51006]: 2026-01-05 14:20:02.839643111 +0000 UTC m=+18.864279125 image pull 6e61bfccaf21ee9962f8af7b3bc33737123ae362fb340f43cd517263f3ab794c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 05 14:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:03 compute-0 sudo[50991]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:03 compute-0 sudo[51326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uanxwjxfwgzsjxzfmrgizvbleyfypuhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622803.3247728-332-163185708754046/AnsiballZ_podman_image.py'
Jan 05 14:20:03 compute-0 sudo[51326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:03 compute-0 python3.9[51328]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 14:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:05 compute-0 podman[51340]: 2026-01-05 14:20:05.337731533 +0000 UTC m=+1.346058148 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 05 14:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:05 compute-0 sudo[51326]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:06 compute-0 sudo[51611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zddgmqmfqllvybcmarwkossemnxebawz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622805.9515312-348-277401689756207/AnsiballZ_podman_image.py'
Jan 05 14:20:06 compute-0 sudo[51611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:06 compute-0 python3.9[51613]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 14:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:09 compute-0 podman[51626]: 2026-01-05 14:20:09.933036101 +0000 UTC m=+3.270875147 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 05 14:20:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:10 compute-0 sudo[51611]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:10 compute-0 sudo[51880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjqcouydriqudmmwaprpwedjqoydvoed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622810.4016316-348-44927992004685/AnsiballZ_podman_image.py'
Jan 05 14:20:10 compute-0 sudo[51880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:11 compute-0 python3.9[51882]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 14:20:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:17 compute-0 podman[51894]: 2026-01-05 14:20:17.279394627 +0000 UTC m=+6.087101326 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 05 14:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:20:17 compute-0 sudo[51880]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:18 compute-0 sshd-session[45092]: Connection closed by 192.168.122.30 port 53356
Jan 05 14:20:18 compute-0 sshd-session[45089]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:20:18 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 05 14:20:18 compute-0 systemd[1]: session-11.scope: Consumed 2min 51.809s CPU time.
Jan 05 14:20:18 compute-0 systemd-logind[795]: Session 11 logged out. Waiting for processes to exit.
Jan 05 14:20:18 compute-0 systemd-logind[795]: Removed session 11.
Jan 05 14:20:25 compute-0 sshd-session[52138]: Accepted publickey for zuul from 192.168.122.30 port 60936 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:20:25 compute-0 systemd-logind[795]: New session 12 of user zuul.
Jan 05 14:20:25 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 05 14:20:25 compute-0 sshd-session[52138]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:20:26 compute-0 python3.9[52291]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:20:27 compute-0 sudo[52445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-betwayjanpbokuypmtrgqxyytjpkycya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622827.2284286-36-102900612661009/AnsiballZ_getent.py'
Jan 05 14:20:27 compute-0 sudo[52445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:27 compute-0 python3.9[52447]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 05 14:20:27 compute-0 sudo[52445]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:28 compute-0 sudo[52598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekocdomazyrtyykcopgwvobccyhaxdgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622828.1136916-44-169699850998986/AnsiballZ_group.py'
Jan 05 14:20:28 compute-0 sudo[52598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:28 compute-0 python3.9[52600]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 14:20:28 compute-0 groupadd[52601]: group added to /etc/group: name=openvswitch, GID=42476
Jan 05 14:20:28 compute-0 groupadd[52601]: group added to /etc/gshadow: name=openvswitch
Jan 05 14:20:28 compute-0 groupadd[52601]: new group: name=openvswitch, GID=42476
Jan 05 14:20:28 compute-0 sudo[52598]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:29 compute-0 sudo[52756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwmixndtuvupnvaqxckrqmbebtoduwyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622829.1405528-52-272053851561969/AnsiballZ_user.py'
Jan 05 14:20:29 compute-0 sudo[52756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:29 compute-0 python3.9[52758]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 05 14:20:29 compute-0 useradd[52760]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 05 14:20:29 compute-0 useradd[52760]: add 'openvswitch' to group 'hugetlbfs'
Jan 05 14:20:29 compute-0 useradd[52760]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 05 14:20:29 compute-0 sudo[52756]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:30 compute-0 sudo[52916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkmlwunoluwypfvqjldnteawqiexadhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622830.2443633-62-109851230453180/AnsiballZ_setup.py'
Jan 05 14:20:30 compute-0 sudo[52916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:30 compute-0 python3.9[52918]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:20:31 compute-0 sudo[52916]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:31 compute-0 sudo[53000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewowzeqjcbheupplubocnqaeodseycep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622830.2443633-62-109851230453180/AnsiballZ_dnf.py'
Jan 05 14:20:31 compute-0 sudo[53000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:31 compute-0 python3.9[53002]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:20:33 compute-0 sudo[53000]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:34 compute-0 sudo[53162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdqhnmtcyjkelkkiraphbuczfsictmaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622833.7672231-76-82296278783145/AnsiballZ_dnf.py'
Jan 05 14:20:34 compute-0 sudo[53162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:34 compute-0 python3.9[53164]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:20:47 compute-0 kernel: SELinux:  Converting 2729 SID table entries...
Jan 05 14:20:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 14:20:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 14:20:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 14:20:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 14:20:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 14:20:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 14:20:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 14:20:47 compute-0 groupadd[53187]: group added to /etc/group: name=unbound, GID=994
Jan 05 14:20:47 compute-0 groupadd[53187]: group added to /etc/gshadow: name=unbound
Jan 05 14:20:47 compute-0 groupadd[53187]: new group: name=unbound, GID=994
Jan 05 14:20:47 compute-0 useradd[53194]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 05 14:20:47 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 05 14:20:47 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 05 14:20:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 14:20:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 14:20:48 compute-0 systemd[1]: Reloading.
Jan 05 14:20:49 compute-0 systemd-rc-local-generator[53693]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:20:49 compute-0 systemd-sysv-generator[53696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:20:49 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 14:20:49 compute-0 sudo[53162]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 14:20:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 14:20:49 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.038s CPU time.
Jan 05 14:20:49 compute-0 systemd[1]: run-r3a308f9654734e28bf7921daa0c11ff4.service: Deactivated successfully.
Jan 05 14:20:50 compute-0 sudo[54260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slmikjfiutyolsseebkrygbsueaknlrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622849.9430213-84-31000608416556/AnsiballZ_systemd.py'
Jan 05 14:20:50 compute-0 sudo[54260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:50 compute-0 python3.9[54262]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 14:20:51 compute-0 systemd[1]: Reloading.
Jan 05 14:20:51 compute-0 systemd-rc-local-generator[54290]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:20:51 compute-0 systemd-sysv-generator[54294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:20:51 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 05 14:20:51 compute-0 chown[54303]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 05 14:20:51 compute-0 ovs-ctl[54308]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 05 14:20:51 compute-0 ovs-ctl[54308]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 05 14:20:51 compute-0 ovs-ctl[54308]: Starting ovsdb-server [  OK  ]
Jan 05 14:20:51 compute-0 ovs-vsctl[54357]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 05 14:20:51 compute-0 ovs-vsctl[54377]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"82a66401-c715-4a23-aa01-55f1bbd6f669\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 05 14:20:51 compute-0 ovs-ctl[54308]: Configuring Open vSwitch system IDs [  OK  ]
Jan 05 14:20:51 compute-0 ovs-ctl[54308]: Enabling remote OVSDB managers [  OK  ]
Jan 05 14:20:51 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 05 14:20:51 compute-0 ovs-vsctl[54383]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 05 14:20:51 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 05 14:20:51 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 05 14:20:51 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 05 14:20:51 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 05 14:20:51 compute-0 ovs-ctl[54428]: Inserting openvswitch module [  OK  ]
Jan 05 14:20:52 compute-0 ovs-ctl[54397]: Starting ovs-vswitchd [  OK  ]
Jan 05 14:20:52 compute-0 ovs-vsctl[54448]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 05 14:20:52 compute-0 ovs-ctl[54397]: Enabling remote OVSDB managers [  OK  ]
Jan 05 14:20:52 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 05 14:20:52 compute-0 systemd[1]: Starting Open vSwitch...
Jan 05 14:20:52 compute-0 systemd[1]: Finished Open vSwitch.
Jan 05 14:20:52 compute-0 sudo[54260]: pam_unix(sudo:session): session closed for user root
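[editor's note] The ovs-ctl/ovs-vsctl lines above create the OVSDB, start ovsdb-server and ovs-vswitchd, and record a system-id and the hostname as external-ids. A minimal sketch of the same settings driven from Python, assuming ovs-vsctl is on PATH and ovsdb-server is already running; the helper name and the generated-UUID fallback are illustrative, not part of ovs-ctl:

    import socket
    import subprocess
    import uuid

    def configure_ovs_ids(system_id=None):
        """Mirror the external-ids that ovs-ctl records in the Open_vSwitch table."""
        system_id = system_id or str(uuid.uuid4())
        subprocess.run(
            ["ovs-vsctl", "--no-wait", "set", "Open_vSwitch", ".",
             "external-ids:system-id=%s" % system_id],
            check=True,
        )
        subprocess.run(
            ["ovs-vsctl", "--no-wait", "add", "Open_vSwitch", ".",
             "external-ids", "hostname=%s" % socket.gethostname()],
            check=True,
        )

    if __name__ == "__main__":
        configure_ovs_ids()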
Jan 05 14:20:53 compute-0 python3.9[54600]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:20:54 compute-0 sudo[54750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igjefgwsbwexrjvyktpjfnedwsepuidh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622853.4739478-102-229604588407350/AnsiballZ_sefcontext.py'
Jan 05 14:20:54 compute-0 sudo[54750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:54 compute-0 python3.9[54752]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 05 14:20:55 compute-0 kernel: SELinux:  Converting 2743 SID table entries...
Jan 05 14:20:55 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 14:20:55 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 14:20:55 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 14:20:55 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 14:20:55 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 14:20:55 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 14:20:55 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 14:20:55 compute-0 sudo[54750]: pam_unix(sudo:session): session closed for user root
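[editor's note] The sefcontext task above registers container_file_t for /var/lib/edpm-config(/.*)?, which is what triggers the SELinux policy reload the kernel logs right after it. A rough equivalent with the standard SELinux tooling, assuming policycoreutils-python-utils (semanage) is installed; the wrapper itself is only a sketch:

    import subprocess

    TARGET = "/var/lib/edpm-config(/.*)?"

    def label_edpm_config(target=TARGET, setype="container_file_t"):
        """Register a file-context rule and relabel the existing tree."""
        # semanage fails if the rule is already defined; use "-m" to modify instead.
        subprocess.run(
            ["semanage", "fcontext", "-a", "-t", setype, target], check=True
        )
        # restorecon takes the plain path, not the regex form of the rule.
        subprocess.run(["restorecon", "-R", "/var/lib/edpm-config"], check=True)

    if __name__ == "__main__":
        label_edpm_config()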
Jan 05 14:20:56 compute-0 python3.9[54907]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:20:57 compute-0 sudo[55063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apqjsynpizqtassajlyjldywqpfofopq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622857.2355134-120-88741796280550/AnsiballZ_dnf.py'
Jan 05 14:20:57 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 05 14:20:57 compute-0 sudo[55063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:57 compute-0 python3.9[55065]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:20:59 compute-0 sudo[55063]: pam_unix(sudo:session): session closed for user root
Jan 05 14:20:59 compute-0 sudo[55216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqucytvujajxwkvlxgglkmyxdmytvrhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622859.2496943-128-79952714528461/AnsiballZ_command.py'
Jan 05 14:20:59 compute-0 sudo[55216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:20:59 compute-0 python3.9[55218]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:21:00 compute-0 sudo[55216]: pam_unix(sudo:session): session closed for user root
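[editor's note] The rpm -V command above re-verifies the packages the preceding dnf task installed. A small sketch of consuming that output, with the package list truncated here; rpm exits non-zero whenever any installed file deviates from the package metadata:

    import subprocess

    PACKAGES = ["driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager"]

    def verify(packages):
        """Return rpm -V discrepancy lines; an empty list means everything verifies."""
        result = subprocess.run(
            ["rpm", "-V"] + list(packages), capture_output=True, text=True
        )
        # Each reported line starts with a nine-character attribute field
        # (e.g. "S.5....T.") followed by an optional file type flag and the path.
        return [line for line in result.stdout.splitlines() if line.strip()]

    if __name__ == "__main__":
        for line in verify(PACKAGES):
            print(line)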
Jan 05 14:21:01 compute-0 sudo[55503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzabwtddyqhrykhlfxvsjrhqkwmkttmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622860.8954978-136-39950380802737/AnsiballZ_file.py'
Jan 05 14:21:01 compute-0 sudo[55503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:01 compute-0 python3.9[55505]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 05 14:21:01 compute-0 sudo[55503]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:02 compute-0 python3.9[55655]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:21:03 compute-0 sudo[55807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoeiebifhanzaegyrgoubmhjctusycnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622862.7102478-152-55956521620256/AnsiballZ_dnf.py'
Jan 05 14:21:03 compute-0 sudo[55807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:03 compute-0 python3.9[55809]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:21:05 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 14:21:05 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 14:21:05 compute-0 systemd[1]: Reloading.
Jan 05 14:21:05 compute-0 systemd-rc-local-generator[55849]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:21:05 compute-0 systemd-sysv-generator[55852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:21:05 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 14:21:06 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 14:21:06 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 14:21:06 compute-0 systemd[1]: run-rbede219b719741e489f9c85720530b17.service: Deactivated successfully.
Jan 05 14:21:06 compute-0 sudo[55807]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:06 compute-0 sudo[56123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pijvcdnygpohoutusouhswzrfcblpuki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622866.4962776-160-6416864035999/AnsiballZ_systemd.py'
Jan 05 14:21:06 compute-0 sudo[56123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:07 compute-0 python3.9[56125]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:21:07 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 05 14:21:07 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 05 14:21:07 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 05 14:21:07 compute-0 NetworkManager[7182]: <info>  [1767622867.2429] caught SIGTERM, shutting down normally.
Jan 05 14:21:07 compute-0 systemd[1]: Stopping Network Manager...
Jan 05 14:21:07 compute-0 NetworkManager[7182]: <info>  [1767622867.2442] dhcp4 (eth0): canceled DHCP transaction
Jan 05 14:21:07 compute-0 NetworkManager[7182]: <info>  [1767622867.2442] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 14:21:07 compute-0 NetworkManager[7182]: <info>  [1767622867.2442] dhcp4 (eth0): state changed no lease
Jan 05 14:21:07 compute-0 NetworkManager[7182]: <info>  [1767622867.2444] manager: NetworkManager state is now CONNECTED_SITE
Jan 05 14:21:07 compute-0 NetworkManager[7182]: <info>  [1767622867.2520] exiting (success)
Jan 05 14:21:07 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 14:21:07 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 14:21:07 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 05 14:21:07 compute-0 systemd[1]: Stopped Network Manager.
Jan 05 14:21:07 compute-0 systemd[1]: NetworkManager.service: Consumed 15.242s CPU time, 4.1M memory peak, read 0B from disk, written 30.5K to disk.
Jan 05 14:21:07 compute-0 systemd[1]: Starting Network Manager...
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.3246] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:4a842e6d-ff22-4aef-a67c-1e6f6b9a395f)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.3249] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.3298] manager[0x557f470bd000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 05 14:21:07 compute-0 systemd[1]: Starting Hostname Service...
Jan 05 14:21:07 compute-0 systemd[1]: Started Hostname Service.
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4393] hostname: hostname: using hostnamed
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4395] hostname: static hostname changed from (none) to "compute-0"
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4400] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4405] manager[0x557f470bd000]: rfkill: Wi-Fi hardware radio set enabled
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4406] manager[0x557f470bd000]: rfkill: WWAN hardware radio set enabled
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4431] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4441] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4441] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4442] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4442] manager: Networking is enabled by state file
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4445] settings: Loaded settings plugin: keyfile (internal)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4450] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4475] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4484] dhcp: init: Using DHCP client 'internal'
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4487] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4492] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4497] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4506] device (lo): Activation: starting connection 'lo' (3df85b44-84fa-4707-aff2-a3490d11ca8e)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4514] device (eth0): carrier: link connected
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4518] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4524] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4525] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4531] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4538] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4546] device (eth1): carrier: link connected
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4552] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4558] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f6c38ead-36b2-5d84-9f47-323474c4e071) (indicated)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4558] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4565] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4572] device (eth1): Activation: starting connection 'ci-private-network' (f6c38ead-36b2-5d84-9f47-323474c4e071)
Jan 05 14:21:07 compute-0 systemd[1]: Started Network Manager.
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4582] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4591] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4594] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4597] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4599] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4603] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4606] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4610] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4615] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4624] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4629] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4640] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4659] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4698] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4703] dhcp4 (eth0): state changed new lease, address=38.102.83.115
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4708] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4718] device (lo): Activation: successful, device activated.
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4735] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 05 14:21:07 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4826] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4835] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4837] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4840] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4843] device (eth1): Activation: successful, device activated.
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4866] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4868] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4874] manager: NetworkManager state is now CONNECTED_SITE
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4878] device (eth0): Activation: successful, device activated.
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4886] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 05 14:21:07 compute-0 NetworkManager[56139]: <info>  [1767622867.4890] manager: startup complete
Jan 05 14:21:07 compute-0 sudo[56123]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:07 compute-0 systemd[1]: Finished Network Manager Wait Online.
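[editor's note] The systemd task above restarts NetworkManager so it comes back with the freshly installed ovs plugin loaded, and wait-online only finishes once startup completes. A compact way to script the same sequence, assuming systemctl and nm-online are available on the host:

    import subprocess

    def restart_networkmanager(timeout_s=60):
        """Restart NetworkManager and block until it reports startup complete."""
        subprocess.run(["systemctl", "restart", "NetworkManager"], check=True)
        # nm-online -s waits for NetworkManager startup, the same condition
        # NetworkManager-wait-online.service checks; -q keeps it quiet.
        subprocess.run(["nm-online", "-s", "-q", "-t", str(timeout_s)], check=True)

    if __name__ == "__main__":
        restart_networkmanager()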
Jan 05 14:21:08 compute-0 sudo[56349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucvjvaejbjvnzkwfkjihewogjuywepgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622867.7440119-168-128322646654486/AnsiballZ_dnf.py'
Jan 05 14:21:08 compute-0 sudo[56349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:08 compute-0 python3.9[56351]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:21:13 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 14:21:13 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 14:21:13 compute-0 systemd[1]: Reloading.
Jan 05 14:21:13 compute-0 systemd-rc-local-generator[56407]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:21:13 compute-0 systemd-sysv-generator[56410]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:21:13 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 14:21:14 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 14:21:14 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 14:21:14 compute-0 systemd[1]: run-r8e639d44ebc64881ba4bcc2c9a9a0baa.service: Deactivated successfully.
Jan 05 14:21:14 compute-0 sudo[56349]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:14 compute-0 sudo[56811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppngwstgztjfnfwgkelxnslshvhfpwgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622874.6238804-180-249367471497033/AnsiballZ_stat.py'
Jan 05 14:21:14 compute-0 sudo[56811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:15 compute-0 python3.9[56813]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:21:15 compute-0 sudo[56811]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:15 compute-0 sudo[56963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyqekmpukflqbagxusugdaaacvmlgfvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622875.4420166-189-16297393442012/AnsiballZ_ini_file.py'
Jan 05 14:21:15 compute-0 sudo[56963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:16 compute-0 python3.9[56965]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:21:16 compute-0 sudo[56963]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:16 compute-0 sudo[57117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxaeuovksudrlhomnnhvsbkjqmecwauy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622876.5316806-199-243629084048159/AnsiballZ_ini_file.py'
Jan 05 14:21:16 compute-0 sudo[57117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:17 compute-0 python3.9[57119]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:21:17 compute-0 sudo[57117]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:17 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 14:21:17 compute-0 sudo[57269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvvlponnuhsxcbynozgofvqtlfrsxbki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622877.3007817-199-86851300347000/AnsiballZ_ini_file.py'
Jan 05 14:21:17 compute-0 sudo[57269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:17 compute-0 python3.9[57271]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:21:17 compute-0 sudo[57269]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:18 compute-0 sudo[57421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccsfsrbirvexfofalzmrjjpzrjccnked ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622878.0767367-214-183774765757230/AnsiballZ_ini_file.py'
Jan 05 14:21:18 compute-0 sudo[57421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:18 compute-0 python3.9[57423]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:21:18 compute-0 sudo[57421]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:19 compute-0 sudo[57573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjdxzmnkiqanttfawkvabgadzeqgqfne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622878.859622-214-56951061047796/AnsiballZ_ini_file.py'
Jan 05 14:21:19 compute-0 sudo[57573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:19 compute-0 python3.9[57575]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:21:19 compute-0 sudo[57573]: pam_unix(sudo:session): session closed for user root
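[editor's note] The five ini_file tasks above leave no-auto-default=* in the [main] section of NetworkManager.conf and strip any dns= or rc-manager= overrides from that file and the cloud-init drop-in. A configparser sketch of the resulting edit on the main file, assuming a stock layout; it illustrates the end state, not the module's own implementation, and comments in the file are not preserved:

    import configparser

    def adjust_nm_conf(path="/etc/NetworkManager/NetworkManager.conf"):
        cfg = configparser.ConfigParser()
        cfg.read(path)
        if not cfg.has_section("main"):
            cfg.add_section("main")
        # Stop NetworkManager from generating "Wired connection N" profiles
        # for NICs that have no explicit configuration.
        cfg.set("main", "no-auto-default", "*")
        # Remove dns= and rc-manager= so the built-in defaults apply again.
        cfg.remove_option("main", "dns")
        cfg.remove_option("main", "rc-manager")
        with open(path, "w") as handle:
            cfg.write(handle, space_around_delimiters=False)

    if __name__ == "__main__":
        adjust_nm_conf()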
Jan 05 14:21:20 compute-0 sudo[57725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvrhqiroalplgzmtgfpwkezhnojwllza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622879.5863907-229-123155521465752/AnsiballZ_stat.py'
Jan 05 14:21:20 compute-0 sudo[57725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:20 compute-0 python3.9[57727]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:21:20 compute-0 sudo[57725]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:20 compute-0 sudo[57848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uefqjqicltgjwwiqdlgensguxxuewrge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622879.5863907-229-123155521465752/AnsiballZ_copy.py'
Jan 05 14:21:20 compute-0 sudo[57848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:21 compute-0 python3.9[57850]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1767622879.5863907-229-123155521465752/.source _original_basename=.4s88uu9c follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:21:21 compute-0 sudo[57848]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:21 compute-0 sudo[58000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whrmdtbgcyeehyvyfcztlhexafzsjrqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622881.323174-244-207995896328545/AnsiballZ_file.py'
Jan 05 14:21:21 compute-0 sudo[58000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:21 compute-0 python3.9[58002]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:21:21 compute-0 sudo[58000]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:22 compute-0 sudo[58152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpjtkchezvvxrifcymyuhinchbjrzsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622882.1560092-252-247921223049911/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 05 14:21:22 compute-0 sudo[58152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:22 compute-0 python3.9[58154]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 05 14:21:22 compute-0 sudo[58152]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:23 compute-0 sudo[58304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btgnsaquggjnlkjmzeqwzwotgmmlahqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622883.06904-261-130110637866790/AnsiballZ_file.py'
Jan 05 14:21:23 compute-0 sudo[58304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:23 compute-0 python3.9[58306]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:21:23 compute-0 sudo[58304]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:24 compute-0 sudo[58457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxjzflhgeerjhlxmckzxgaoevnuduujs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622884.4941204-271-10028988825915/AnsiballZ_stat.py'
Jan 05 14:21:24 compute-0 sudo[58457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:25 compute-0 sudo[58457]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:25 compute-0 sudo[58580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqalynowzaypgniixvpsfwceuemzygch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622884.4941204-271-10028988825915/AnsiballZ_copy.py'
Jan 05 14:21:25 compute-0 sudo[58580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:25 compute-0 sudo[58580]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:26 compute-0 sudo[58732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkglelvyaugfkhfnihzqwsvlpzrelyif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622885.8967857-286-221961829627994/AnsiballZ_slurp.py'
Jan 05 14:21:26 compute-0 sudo[58732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:26 compute-0 python3.9[58734]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 05 14:21:26 compute-0 sudo[58732]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:27 compute-0 sudo[58907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcwxwsouxejgexbdmlqruvfqnwtzeduv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622886.8805845-295-144585898514912/async_wrapper.py j215540874146 300 /home/zuul/.ansible/tmp/ansible-tmp-1767622886.8805845-295-144585898514912/AnsiballZ_edpm_os_net_config.py _'
Jan 05 14:21:27 compute-0 sudo[58907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:27 compute-0 ansible-async_wrapper.py[58909]: Invoked with j215540874146 300 /home/zuul/.ansible/tmp/ansible-tmp-1767622886.8805845-295-144585898514912/AnsiballZ_edpm_os_net_config.py _
Jan 05 14:21:27 compute-0 ansible-async_wrapper.py[58912]: Starting module and watcher
Jan 05 14:21:27 compute-0 ansible-async_wrapper.py[58912]: Start watching 58913 (300)
Jan 05 14:21:27 compute-0 ansible-async_wrapper.py[58913]: Start module (58913)
Jan 05 14:21:27 compute-0 ansible-async_wrapper.py[58909]: Return async_wrapper task started.
Jan 05 14:21:27 compute-0 sudo[58907]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:27 compute-0 python3.9[58914]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
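[editor's note] The async task above hands /etc/os-net-config/config.yaml to the edpm_os_net_config module with cleanup, debug and detailed exit codes enabled. A minimal sketch of driving the os-net-config CLI the same way; the flag names and the meaning of exit code 2 ("changes applied") are assumptions to confirm against os-net-config --help, and the top-level network_config key follows the tool's usual schema:

    import subprocess
    import yaml  # provided by python3-pyyaml, installed earlier in this run

    def apply_network_config(config_path="/etc/os-net-config/config.yaml"):
        with open(config_path) as handle:
            desired = yaml.safe_load(handle) or {}
        entries = desired.get("network_config", [])
        print("applying %d interface entries" % len(entries))
        result = subprocess.run(
            ["os-net-config", "-c", config_path, "--debug", "--detailed-exit-codes"]
        )
        # Assumed convention: 0 = no changes, 2 = changes applied, other = error.
        return result.returncode

    if __name__ == "__main__":
        apply_network_config()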
Jan 05 14:21:28 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 05 14:21:28 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 05 14:21:28 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 05 14:21:28 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 05 14:21:28 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.7712] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.7737] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8460] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8462] audit: op="connection-add" uuid="f01c59b2-467b-45cf-a113-81ce59cc5336" name="br-ex-br" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8480] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8482] audit: op="connection-add" uuid="00650865-c0bf-4ec6-b6db-f229cfb57ec5" name="br-ex-port" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8497] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8498] audit: op="connection-add" uuid="88467e32-ea6e-4d7e-af3d-45f804f6c0b9" name="eth1-port" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8513] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8515] audit: op="connection-add" uuid="8696e7ec-ac8e-4be0-9c3d-3679d411fca4" name="vlan20-port" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8529] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8531] audit: op="connection-add" uuid="c77b7bfd-185f-4a8e-a094-8d227a375e58" name="vlan21-port" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8546] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8547] audit: op="connection-add" uuid="caa50f75-e866-47c9-9d6f-b0a587b87c0e" name="vlan22-port" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8569] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8588] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8590] audit: op="connection-add" uuid="58cef28d-8102-46de-9051-56f39c6639b8" name="br-ex-if" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8673] audit: op="connection-update" uuid="f6c38ead-36b2-5d84-9f47-323474c4e071" name="ci-private-network" args="ipv6.method,ipv6.addresses,ipv6.routes,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ipv4.method,ipv4.addresses,ipv4.routes,ipv4.never-default,ipv4.routing-rules,ipv4.dns,connection.port-type,connection.master,connection.slave-type,connection.timestamp,connection.controller,ovs-external-ids.data,ovs-interface.type" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8690] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8693] audit: op="connection-add" uuid="90dd4740-874d-4ef1-b3d6-e859cc8d02e4" name="vlan20-if" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8709] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8712] audit: op="connection-add" uuid="f6bfaf70-1f7c-4070-b27b-609a35a96423" name="vlan21-if" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8731] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8733] audit: op="connection-add" uuid="9537eb8b-b541-4c52-bc57-1cea1aa31c03" name="vlan22-if" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8745] audit: op="connection-delete" uuid="25bbf212-db31-38a8-8c4b-a6f883cb4430" name="Wired connection 1" pid=58915 uid=0 result="success"
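[editor's note] The checkpoint and connection-add operations above build the br-ex OVS bridge, its ports and the VLAN interfaces through the NetworkManager D-Bus API. The same topology can be sketched with nmcli, assuming NetworkManager-ovs is installed; the connection names mirror the log and the addressing is left disabled as a placeholder:

    import subprocess

    def nmcli_add(*args):
        subprocess.run(["nmcli", "connection", "add"] + list(args), check=True)

    def create_ovs_bridge(bridge="br-ex", uplink="eth1"):
        # NetworkManager models OVS as bridge -> port -> interface.
        nmcli_add("type", "ovs-bridge", "conn.interface", bridge,
                  "con-name", bridge + "-br")
        nmcli_add("type", "ovs-port", "conn.interface", bridge,
                  "master", bridge + "-br", "con-name", bridge + "-port")
        nmcli_add("type", "ovs-interface", "slave-type", "ovs-port",
                  "conn.interface", bridge, "master", bridge + "-port",
                  "con-name", bridge + "-if",
                  "ipv4.method", "disabled", "ipv6.method", "disabled")
        # Attach the physical uplink as a second port on the same bridge.
        nmcli_add("type", "ovs-port", "conn.interface", uplink,
                  "master", bridge + "-br", "con-name", uplink + "-port")
        nmcli_add("type", "ethernet", "conn.interface", uplink,
                  "master", uplink + "-port", "slave-type", "ovs-port",
                  "con-name", uplink + "-if")

    if __name__ == "__main__":
        create_ovs_bridge()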
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8759] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <warn>  [1767622889.8763] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8770] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8776] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f01c59b2-467b-45cf-a113-81ce59cc5336)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8777] audit: op="connection-activate" uuid="f01c59b2-467b-45cf-a113-81ce59cc5336" name="br-ex-br" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8780] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <warn>  [1767622889.8782] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8788] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8794] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (00650865-c0bf-4ec6-b6db-f229cfb57ec5)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8797] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <warn>  [1767622889.8799] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8804] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8810] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (88467e32-ea6e-4d7e-af3d-45f804f6c0b9)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8813] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <warn>  [1767622889.8815] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8821] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8826] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (8696e7ec-ac8e-4be0-9c3d-3679d411fca4)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8828] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <warn>  [1767622889.8829] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8836] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8841] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (c77b7bfd-185f-4a8e-a094-8d227a375e58)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8843] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <warn>  [1767622889.8844] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8852] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8858] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (caa50f75-e866-47c9-9d6f-b0a587b87c0e)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8859] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8861] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8864] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8872] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <warn>  [1767622889.8873] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8877] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8880] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (58cef28d-8102-46de-9051-56f39c6639b8)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8884] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8888] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8889] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8891] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8895] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8912] device (eth1): disconnecting for new activation request.
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8915] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8921] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8926] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8928] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8933] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <warn>  [1767622889.8935] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8939] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8945] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (90dd4740-874d-4ef1-b3d6-e859cc8d02e4)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8946] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8950] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8953] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8954] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8960] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <warn>  [1767622889.8962] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8966] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8971] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (f6bfaf70-1f7c-4070-b27b-609a35a96423)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8972] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8976] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8978] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8980] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8984] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <warn>  [1767622889.8985] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8989] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8994] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (9537eb8b-b541-4c52-bc57-1cea1aa31c03)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8995] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.8999] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9002] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9004] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9006] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9020] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,802-3-ethernet.mtu" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9023] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9027] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9029] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9366] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9373] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9378] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9383] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9385] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9391] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9397] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9402] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 kernel: Timeout policy base is empty
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9404] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9410] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 systemd-udevd[58920]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9415] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9420] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9423] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9429] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9436] dhcp4 (eth0): canceled DHCP transaction
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9436] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9436] dhcp4 (eth0): state changed no lease
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9438] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9453] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9458] audit: op="device-reapply" interface="eth1" ifindex=3 pid=58915 uid=0 result="fail" reason="Device is not activated"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9468] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9479] device (eth1): disconnecting for new activation request.
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9480] audit: op="connection-activate" uuid="f6c38ead-36b2-5d84-9f47-323474c4e071" name="ci-private-network" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9482] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9524] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9528] dhcp4 (eth0): state changed new lease, address=38.102.83.115
Jan 05 14:21:29 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9585] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58915 uid=0 result="success"
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9591] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9703] device (eth1): Activation: starting connection 'ci-private-network' (f6c38ead-36b2-5d84-9f47-323474c4e071)
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9710] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9717] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9722] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9731] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9736] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9741] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9743] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9745] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9746] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9748] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9754] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9762] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9767] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9773] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9778] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9783] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 kernel: br-ex: entered promiscuous mode
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9788] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9794] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9797] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9800] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9804] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9812] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9849] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9867] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9869] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:29 compute-0 NetworkManager[56139]: <info>  [1767622889.9875] device (eth1): Activation: successful, device activated.
Jan 05 14:21:29 compute-0 kernel: vlan22: entered promiscuous mode
Jan 05 14:21:29 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 05 14:21:29 compute-0 systemd-udevd[58919]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 14:21:29 compute-0 kernel: vlan20: entered promiscuous mode
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0002] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 05 14:21:30 compute-0 systemd-udevd[58921]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0023] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0037] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0053] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 systemd-udevd[59014]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 14:21:30 compute-0 kernel: vlan21: entered promiscuous mode
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0062] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0066] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0073] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0135] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0137] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0144] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0163] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0167] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0199] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0209] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0252] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0253] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0255] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0259] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0263] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 14:21:30 compute-0 NetworkManager[56139]: <info>  [1767622890.0266] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 05 14:21:31 compute-0 NetworkManager[56139]: <info>  [1767622891.1373] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58915 uid=0 result="success"
Jan 05 14:21:31 compute-0 NetworkManager[56139]: <info>  [1767622891.4096] checkpoint[0x557f47093950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 05 14:21:31 compute-0 NetworkManager[56139]: <info>  [1767622891.4099] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=58915 uid=0 result="success"
Jan 05 14:21:31 compute-0 sudo[59247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hamztzshnkkltqlnjxypsidtomrjzxtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622890.9197948-295-183604480155167/AnsiballZ_async_status.py'
Jan 05 14:21:31 compute-0 sudo[59247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:31 compute-0 NetworkManager[56139]: <info>  [1767622891.6694] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58915 uid=0 result="success"
Jan 05 14:21:31 compute-0 NetworkManager[56139]: <info>  [1767622891.6706] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58915 uid=0 result="success"
Jan 05 14:21:31 compute-0 python3.9[59249]: ansible-ansible.legacy.async_status Invoked with jid=j215540874146.58909 mode=status _async_dir=/root/.ansible_async
Jan 05 14:21:31 compute-0 sudo[59247]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:31 compute-0 NetworkManager[56139]: <info>  [1767622891.8855] audit: op="networking-control" arg="global-dns-configuration" pid=58915 uid=0 result="success"
Jan 05 14:21:31 compute-0 NetworkManager[56139]: <info>  [1767622891.8897] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 05 14:21:31 compute-0 NetworkManager[56139]: <info>  [1767622891.8939] audit: op="networking-control" arg="global-dns-configuration" pid=58915 uid=0 result="success"
Jan 05 14:21:31 compute-0 NetworkManager[56139]: <info>  [1767622891.9001] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58915 uid=0 result="success"
Jan 05 14:21:32 compute-0 NetworkManager[56139]: <info>  [1767622892.0373] checkpoint[0x557f47093a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 05 14:21:32 compute-0 NetworkManager[56139]: <info>  [1767622892.0376] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=58915 uid=0 result="success"
Jan 05 14:21:32 compute-0 ansible-async_wrapper.py[58913]: Module complete (58913)
Jan 05 14:21:32 compute-0 ansible-async_wrapper.py[58912]: Done in kid B.
Jan 05 14:21:35 compute-0 sudo[59353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdxcvfyqgwnwdnyiajbcjkyjgejntbhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622890.9197948-295-183604480155167/AnsiballZ_async_status.py'
Jan 05 14:21:35 compute-0 sudo[59353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:35 compute-0 python3.9[59355]: ansible-ansible.legacy.async_status Invoked with jid=j215540874146.58909 mode=status _async_dir=/root/.ansible_async
Jan 05 14:21:35 compute-0 sudo[59353]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:35 compute-0 sudo[59452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gutltygmnmuztgttjqwejgdxzggwaakw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622890.9197948-295-183604480155167/AnsiballZ_async_status.py'
Jan 05 14:21:35 compute-0 sudo[59452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:35 compute-0 python3.9[59454]: ansible-ansible.legacy.async_status Invoked with jid=j215540874146.58909 mode=cleanup _async_dir=/root/.ansible_async
Jan 05 14:21:35 compute-0 sudo[59452]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:36 compute-0 sudo[59604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vepqotjnabtoluivaihrpcupfgcikfkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622896.174613-322-94865485269739/AnsiballZ_stat.py'
Jan 05 14:21:36 compute-0 sudo[59604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:36 compute-0 python3.9[59606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:21:36 compute-0 sudo[59604]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:37 compute-0 sudo[59727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvjeafjasptebiberlbczgfsysucymbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622896.174613-322-94865485269739/AnsiballZ_copy.py'
Jan 05 14:21:37 compute-0 sudo[59727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:37 compute-0 python3.9[59729]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767622896.174613-322-94865485269739/.source.returncode _original_basename=.cvv14vjg follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:21:37 compute-0 sudo[59727]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:37 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 05 14:21:37 compute-0 sudo[59882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsvmsyomrkupgavskbxjdrccqvwdpmyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622897.6100168-338-72557142933769/AnsiballZ_stat.py'
Jan 05 14:21:37 compute-0 sudo[59882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:38 compute-0 python3.9[59884]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:21:38 compute-0 sudo[59882]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:38 compute-0 sudo[60006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmdkrijnoiqprzyxgscjqxtwrlyjlicp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622897.6100168-338-72557142933769/AnsiballZ_copy.py'
Jan 05 14:21:38 compute-0 sudo[60006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:38 compute-0 python3.9[60008]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767622897.6100168-338-72557142933769/.source.cfg _original_basename=.wl69fw86 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:21:38 compute-0 sudo[60006]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:39 compute-0 sudo[60158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbehzfshfeqgnnvkgcjjxojmfeaxfylz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622899.172861-353-161451237849297/AnsiballZ_systemd.py'
Jan 05 14:21:39 compute-0 sudo[60158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:21:39 compute-0 python3.9[60160]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:21:39 compute-0 systemd[1]: Reloading Network Manager...
Jan 05 14:21:39 compute-0 NetworkManager[56139]: <info>  [1767622899.9228] audit: op="reload" arg="0" pid=60164 uid=0 result="success"
Jan 05 14:21:39 compute-0 NetworkManager[56139]: <info>  [1767622899.9236] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 05 14:21:39 compute-0 systemd[1]: Reloaded Network Manager.
Jan 05 14:21:39 compute-0 sudo[60158]: pam_unix(sudo:session): session closed for user root
Jan 05 14:21:40 compute-0 sshd-session[52141]: Connection closed by 192.168.122.30 port 60936
Jan 05 14:21:40 compute-0 sshd-session[52138]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:21:40 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 05 14:21:40 compute-0 systemd[1]: session-12.scope: Consumed 53.625s CPU time.
Jan 05 14:21:40 compute-0 systemd-logind[795]: Session 12 logged out. Waiting for processes to exit.
Jan 05 14:21:40 compute-0 systemd-logind[795]: Removed session 12.
Jan 05 14:21:46 compute-0 sshd-session[60195]: Accepted publickey for zuul from 192.168.122.30 port 43996 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:21:46 compute-0 systemd-logind[795]: New session 13 of user zuul.
Jan 05 14:21:46 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 05 14:21:46 compute-0 sshd-session[60195]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:21:47 compute-0 python3.9[60348]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:21:48 compute-0 python3.9[60503]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:21:49 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 14:21:50 compute-0 python3.9[60693]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:21:50 compute-0 sshd-session[60694]: Connection closed by 165.22.168.95 port 42628
Jan 05 14:21:51 compute-0 sshd-session[60198]: Connection closed by 192.168.122.30 port 43996
Jan 05 14:21:51 compute-0 sshd-session[60195]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:21:51 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 05 14:21:51 compute-0 systemd[1]: session-13.scope: Consumed 2.634s CPU time.
Jan 05 14:21:51 compute-0 systemd-logind[795]: Session 13 logged out. Waiting for processes to exit.
Jan 05 14:21:51 compute-0 systemd-logind[795]: Removed session 13.
Jan 05 14:21:57 compute-0 sshd-session[60722]: Accepted publickey for zuul from 192.168.122.30 port 54272 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:21:57 compute-0 systemd-logind[795]: New session 14 of user zuul.
Jan 05 14:21:57 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 05 14:21:57 compute-0 sshd-session[60722]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:21:58 compute-0 python3.9[60876]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:21:59 compute-0 python3.9[61030]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:22:00 compute-0 sudo[61184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-conyrvyngnntsgafuzsedfkxdwhdpchh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622919.988464-40-54476966465001/AnsiballZ_setup.py'
Jan 05 14:22:00 compute-0 sudo[61184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:00 compute-0 python3.9[61186]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:22:00 compute-0 sudo[61184]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:01 compute-0 sudo[61269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekyrfpwclktcraquthvnfpiyqtzchxce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622919.988464-40-54476966465001/AnsiballZ_dnf.py'
Jan 05 14:22:01 compute-0 sudo[61269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:01 compute-0 python3.9[61271]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:22:02 compute-0 sudo[61269]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:03 compute-0 sudo[61422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjslntgwrovskryavzyrfxlmoxwowauf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622923.0722756-52-87263104772694/AnsiballZ_setup.py'
Jan 05 14:22:03 compute-0 sudo[61422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:03 compute-0 python3.9[61424]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:22:03 compute-0 sudo[61422]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:04 compute-0 sudo[61613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkpftmfoskupwiixsjgbednztkkpecih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622924.2460787-63-82604729785426/AnsiballZ_file.py'
Jan 05 14:22:04 compute-0 sudo[61613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:04 compute-0 python3.9[61615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:04 compute-0 sudo[61613]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:05 compute-0 sudo[61765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhyxdvqhfgmfxkzzpozkndgihhrdrkdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622925.348609-71-27603402250852/AnsiballZ_command.py'
Jan 05 14:22:05 compute-0 sudo[61765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:05 compute-0 python3.9[61767]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:22:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:22:06 compute-0 sudo[61765]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:06 compute-0 sudo[61929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpktftvtmflvuixeribvnaudvhvvqvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622926.2366767-79-87711942754927/AnsiballZ_stat.py'
Jan 05 14:22:06 compute-0 sudo[61929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:06 compute-0 python3.9[61931]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:06 compute-0 sudo[61929]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:07 compute-0 sudo[62007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alcbhzuxzodsjshgxwliyxtbgckaalad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622926.2366767-79-87711942754927/AnsiballZ_file.py'
Jan 05 14:22:07 compute-0 sudo[62007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:07 compute-0 python3.9[62009]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:07 compute-0 sudo[62007]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:08 compute-0 sudo[62159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdlctxmxbodlsfftpejonzxtbrwnbjay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622927.667261-91-154705862309781/AnsiballZ_stat.py'
Jan 05 14:22:08 compute-0 sudo[62159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:08 compute-0 python3.9[62161]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:08 compute-0 sudo[62159]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:08 compute-0 sudo[62237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjqtoshhergrhbeiriuoemaoyytkjrwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622927.667261-91-154705862309781/AnsiballZ_file.py'
Jan 05 14:22:08 compute-0 sudo[62237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:08 compute-0 python3.9[62239]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:22:08 compute-0 sudo[62237]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:09 compute-0 sudo[62389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuhktbfypbhppsfccxcsalaorzueizgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622928.9373446-104-157950621277267/AnsiballZ_ini_file.py'
Jan 05 14:22:09 compute-0 sudo[62389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:09 compute-0 python3.9[62391]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:22:09 compute-0 sudo[62389]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:10 compute-0 sudo[62541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boxkdlkgqziazvwcyqfqsyjqsgexgvnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622929.7863026-104-1195341409953/AnsiballZ_ini_file.py'
Jan 05 14:22:10 compute-0 sudo[62541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:10 compute-0 python3.9[62543]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:22:10 compute-0 sudo[62541]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:10 compute-0 sudo[62693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwaozatxqjepbrzjgzdkpxilwyllpuzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622930.575209-104-190835357158181/AnsiballZ_ini_file.py'
Jan 05 14:22:10 compute-0 sudo[62693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:11 compute-0 python3.9[62695]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:22:11 compute-0 sudo[62693]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:11 compute-0 sudo[62845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlunmolnqgrbvrneihcvwojklzvywhmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622931.2604148-104-99143198956668/AnsiballZ_ini_file.py'
Jan 05 14:22:11 compute-0 sudo[62845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:11 compute-0 python3.9[62847]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:22:11 compute-0 sudo[62845]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:12 compute-0 sudo[62997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geymexipxswgtklsvozsadgvfdialsom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622932.1235094-135-102269886396252/AnsiballZ_dnf.py'
Jan 05 14:22:12 compute-0 sudo[62997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:12 compute-0 python3.9[62999]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:22:13 compute-0 sudo[62997]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:14 compute-0 sudo[63150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ounllunkknncofvtauraikuzusntldlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622934.5469174-146-80653595129518/AnsiballZ_setup.py'
Jan 05 14:22:14 compute-0 sudo[63150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:15 compute-0 python3.9[63152]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:22:15 compute-0 sudo[63150]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:15 compute-0 sudo[63304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amdlefrjjcpfimbpuachnvcxmuedtkko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622935.4982405-154-266243240668114/AnsiballZ_stat.py'
Jan 05 14:22:15 compute-0 sudo[63304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:16 compute-0 python3.9[63306]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:22:16 compute-0 sudo[63304]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:16 compute-0 sudo[63456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxduyqtennjhdgauwjqsubdrzhamvrjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622936.3342385-163-204143850021128/AnsiballZ_stat.py'
Jan 05 14:22:16 compute-0 sudo[63456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:16 compute-0 python3.9[63458]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:22:16 compute-0 sudo[63456]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:17 compute-0 sudo[63608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmbipfcisqugoemwjkupabetizwcqfke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622937.238285-173-109204479870525/AnsiballZ_command.py'
Jan 05 14:22:17 compute-0 sudo[63608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:17 compute-0 python3.9[63610]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:22:17 compute-0 sudo[63608]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:18 compute-0 sudo[63761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhgrzamkkoexhstjtketqfgyatbpydqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622938.1697853-183-274712213831079/AnsiballZ_service_facts.py'
Jan 05 14:22:18 compute-0 sudo[63761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:18 compute-0 python3.9[63763]: ansible-service_facts Invoked
Jan 05 14:22:18 compute-0 network[63780]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 14:22:18 compute-0 network[63781]: 'network-scripts' will be removed from distribution in near future.
Jan 05 14:22:18 compute-0 network[63782]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 14:22:22 compute-0 sudo[63761]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:23 compute-0 sudo[64065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moktvijzybntqtuejspmfbwkjolvuwbx ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1767622943.3123462-198-80663964197907/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1767622943.3123462-198-80663964197907/args'
Jan 05 14:22:23 compute-0 sudo[64065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:23 compute-0 sudo[64065]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:24 compute-0 sudo[64232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nktdjarvdbuazmhiiivsczbtsjiwnoaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622944.3270292-209-83273025656883/AnsiballZ_dnf.py'
Jan 05 14:22:24 compute-0 sudo[64232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:24 compute-0 python3.9[64234]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:22:26 compute-0 sudo[64232]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:27 compute-0 sudo[64385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzpptyfhuwtqawtfxvvxpvzfkchhvlyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622946.5117795-222-87150471683676/AnsiballZ_package_facts.py'
Jan 05 14:22:27 compute-0 sudo[64385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:27 compute-0 python3.9[64387]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 05 14:22:27 compute-0 sudo[64385]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:28 compute-0 sudo[64537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzftrbqdvygngxpuaemjtnjyhbzwrhhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622948.1360018-232-221872620572081/AnsiballZ_stat.py'
Jan 05 14:22:28 compute-0 sudo[64537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:28 compute-0 python3.9[64539]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:28 compute-0 sudo[64537]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:29 compute-0 sudo[64662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bapwpaleeqiegawnbfovusawnussiemn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622948.1360018-232-221872620572081/AnsiballZ_copy.py'
Jan 05 14:22:29 compute-0 sudo[64662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:29 compute-0 python3.9[64664]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767622948.1360018-232-221872620572081/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:29 compute-0 sudo[64662]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:30 compute-0 sudo[64816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgndxgxjdnufolfuvwczetdgsegroblp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622949.9555383-247-119427160407652/AnsiballZ_stat.py'
Jan 05 14:22:30 compute-0 sudo[64816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:30 compute-0 python3.9[64818]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:30 compute-0 sudo[64816]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:31 compute-0 sudo[64941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsfnovtbrrtlrdvdpwwwagqxxlvnihsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622949.9555383-247-119427160407652/AnsiballZ_copy.py'
Jan 05 14:22:31 compute-0 sudo[64941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:31 compute-0 python3.9[64943]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767622949.9555383-247-119427160407652/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:31 compute-0 sudo[64941]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:32 compute-0 sudo[65095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssgkdpantelulnfakcskxclbjxlxdvlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622951.8967996-268-7103593022714/AnsiballZ_lineinfile.py'
Jan 05 14:22:32 compute-0 sudo[65095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:32 compute-0 python3.9[65097]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:32 compute-0 sudo[65095]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:33 compute-0 sudo[65249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcthnurkrqosbycagnbzqzqibtdddieo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622953.3220227-283-6313687370373/AnsiballZ_setup.py'
Jan 05 14:22:33 compute-0 sudo[65249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:33 compute-0 python3.9[65251]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:22:34 compute-0 sudo[65249]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:34 compute-0 sudo[65333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pketckpzzckogodziwikzcamgofnbrrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622953.3220227-283-6313687370373/AnsiballZ_systemd.py'
Jan 05 14:22:34 compute-0 sudo[65333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:35 compute-0 python3.9[65335]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:22:35 compute-0 sudo[65333]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:36 compute-0 sudo[65487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohrndsqokxjiqnfjtfnquaeltdolqqlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622955.8344817-299-272010399929715/AnsiballZ_setup.py'
Jan 05 14:22:36 compute-0 sudo[65487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:36 compute-0 python3.9[65489]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:22:36 compute-0 sudo[65487]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:37 compute-0 sudo[65571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxwpvlsrnwtommgodimjepltbrunqeho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622955.8344817-299-272010399929715/AnsiballZ_systemd.py'
Jan 05 14:22:37 compute-0 sudo[65571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:37 compute-0 python3.9[65573]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:22:37 compute-0 chronyd[785]: chronyd exiting
Jan 05 14:22:37 compute-0 systemd[1]: Stopping NTP client/server...
Jan 05 14:22:37 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 05 14:22:37 compute-0 systemd[1]: Stopped NTP client/server.
Jan 05 14:22:37 compute-0 systemd[1]: Starting NTP client/server...
Jan 05 14:22:37 compute-0 chronyd[65581]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 05 14:22:37 compute-0 chronyd[65581]: Frequency -26.861 +/- 0.303 ppm read from /var/lib/chrony/drift
Jan 05 14:22:37 compute-0 chronyd[65581]: Loaded seccomp filter (level 2)
Jan 05 14:22:37 compute-0 systemd[1]: Started NTP client/server.
Jan 05 14:22:37 compute-0 sudo[65571]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:37 compute-0 sshd-session[60725]: Connection closed by 192.168.122.30 port 54272
Jan 05 14:22:37 compute-0 sshd-session[60722]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:22:37 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 05 14:22:37 compute-0 systemd[1]: session-14.scope: Consumed 28.909s CPU time.
Jan 05 14:22:37 compute-0 systemd-logind[795]: Session 14 logged out. Waiting for processes to exit.
Jan 05 14:22:37 compute-0 systemd-logind[795]: Removed session 14.
Jan 05 14:22:44 compute-0 sshd-session[65607]: Accepted publickey for zuul from 192.168.122.30 port 49036 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:22:44 compute-0 systemd-logind[795]: New session 15 of user zuul.
Jan 05 14:22:44 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 05 14:22:44 compute-0 sshd-session[65607]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:22:45 compute-0 python3.9[65760]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:22:46 compute-0 sudo[65914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjqadfvermwdakloxaflvfxrupgquckc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622965.7182505-33-277450030900843/AnsiballZ_file.py'
Jan 05 14:22:46 compute-0 sudo[65914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:46 compute-0 python3.9[65916]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:46 compute-0 sudo[65914]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:47 compute-0 sudo[66089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwskshyoqfojunwndsugdedbzmmkzfar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622966.6739752-41-279612538973555/AnsiballZ_stat.py'
Jan 05 14:22:47 compute-0 sudo[66089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:47 compute-0 python3.9[66091]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:47 compute-0 sudo[66089]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:47 compute-0 sudo[66167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvmhdccegyrvfuptmblkvvzbezvupion ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622966.6739752-41-279612538973555/AnsiballZ_file.py'
Jan 05 14:22:47 compute-0 sudo[66167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:48 compute-0 python3.9[66169]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.c0ry8mf6 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:48 compute-0 sudo[66167]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:48 compute-0 sudo[66319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gppjdqoxbnkiobhflnsvhqmkwwcsmgdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622968.4264941-61-155455772300627/AnsiballZ_stat.py'
Jan 05 14:22:48 compute-0 sudo[66319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:49 compute-0 python3.9[66321]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:49 compute-0 sudo[66319]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:49 compute-0 sudo[66442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvkwxnonvyjvxdsasjubwljhciijcxtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622968.4264941-61-155455772300627/AnsiballZ_copy.py'
Jan 05 14:22:49 compute-0 sudo[66442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:49 compute-0 python3.9[66444]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767622968.4264941-61-155455772300627/.source _original_basename=.bjd69or9 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:49 compute-0 sudo[66442]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:50 compute-0 sudo[66594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bptkhlddzgfmavrzerxadghzyfmlfcad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622970.1270313-77-10186305558242/AnsiballZ_file.py'
Jan 05 14:22:50 compute-0 sudo[66594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:50 compute-0 python3.9[66596]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:22:50 compute-0 sudo[66594]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:51 compute-0 sudo[66746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwmepllmuckptkdolvoqifkyrxthaqtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622970.8915088-85-259880042546618/AnsiballZ_stat.py'
Jan 05 14:22:51 compute-0 sudo[66746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:51 compute-0 python3.9[66748]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:51 compute-0 sudo[66746]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:51 compute-0 sudo[66869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghwjukvwyvfeeldtqneckjkuiatbvjtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622970.8915088-85-259880042546618/AnsiballZ_copy.py'
Jan 05 14:22:51 compute-0 sudo[66869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:52 compute-0 python3.9[66871]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767622970.8915088-85-259880042546618/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:22:52 compute-0 sudo[66869]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:52 compute-0 sudo[67022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzkbqncbctgqlfbuepqvlskcojwzqmri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622972.2911642-85-63329294203138/AnsiballZ_stat.py'
Jan 05 14:22:52 compute-0 sudo[67022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:52 compute-0 python3.9[67024]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:52 compute-0 sudo[67022]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:53 compute-0 sudo[67145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxwgynkktlbcdknrswgyadgfsddoaraq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622972.2911642-85-63329294203138/AnsiballZ_copy.py'
Jan 05 14:22:53 compute-0 sudo[67145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:53 compute-0 python3.9[67147]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767622972.2911642-85-63329294203138/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:22:53 compute-0 sudo[67145]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:54 compute-0 sudo[67297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beniysnjqqepwxhiilybttukcjrvbfyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622973.7114544-114-196800829032434/AnsiballZ_file.py'
Jan 05 14:22:54 compute-0 sudo[67297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:54 compute-0 python3.9[67299]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:54 compute-0 sudo[67297]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:54 compute-0 sudo[67449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrovslegnfprkkzydrjiddwyvbztqrok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622974.4286497-122-48800827935827/AnsiballZ_stat.py'
Jan 05 14:22:54 compute-0 sudo[67449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:54 compute-0 python3.9[67451]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:54 compute-0 sudo[67449]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:55 compute-0 sudo[67572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbkeasxwejnqhobawjowdrktyzbuolkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622974.4286497-122-48800827935827/AnsiballZ_copy.py'
Jan 05 14:22:55 compute-0 sudo[67572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:55 compute-0 python3.9[67574]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767622974.4286497-122-48800827935827/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:55 compute-0 sudo[67572]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:56 compute-0 sudo[67724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlrbvptetixcvgekaurzsxfxepbxbzjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622975.700573-137-193071376858376/AnsiballZ_stat.py'
Jan 05 14:22:56 compute-0 sudo[67724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:56 compute-0 python3.9[67726]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:56 compute-0 sudo[67724]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:56 compute-0 sudo[67847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uicvcrooxyjlsqzesbcphiwgojxuplpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622975.700573-137-193071376858376/AnsiballZ_copy.py'
Jan 05 14:22:56 compute-0 sudo[67847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:56 compute-0 python3.9[67849]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767622975.700573-137-193071376858376/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:22:56 compute-0 sudo[67847]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:57 compute-0 sudo[67999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdkmlhmbvkkvxxmbkyujsajxionciybf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622977.1419032-152-203401945583525/AnsiballZ_systemd.py'
Jan 05 14:22:57 compute-0 sudo[67999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:58 compute-0 python3.9[68001]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:22:58 compute-0 systemd[1]: Reloading.
Jan 05 14:22:58 compute-0 systemd-rc-local-generator[68028]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:22:58 compute-0 systemd-sysv-generator[68034]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:22:58 compute-0 systemd[1]: Reloading.
Jan 05 14:22:58 compute-0 systemd-sysv-generator[68071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:22:58 compute-0 systemd-rc-local-generator[68068]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:22:58 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 05 14:22:58 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 05 14:22:58 compute-0 sudo[67999]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:59 compute-0 sudo[68230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syvdanpmsttepdwojjaggrsvwhbambfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622978.920746-160-112379286624208/AnsiballZ_stat.py'
Jan 05 14:22:59 compute-0 sudo[68230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:22:59 compute-0 python3.9[68232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:22:59 compute-0 sudo[68230]: pam_unix(sudo:session): session closed for user root
Jan 05 14:22:59 compute-0 sudo[68353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afbqahgqusqefnsinrpehkxbhegruvzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622978.920746-160-112379286624208/AnsiballZ_copy.py'
Jan 05 14:22:59 compute-0 sudo[68353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:00 compute-0 python3.9[68355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767622978.920746-160-112379286624208/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:00 compute-0 sudo[68353]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:00 compute-0 sudo[68505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikcblpodspkxynmmjntoednheptbytfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622980.2807617-175-254471791591045/AnsiballZ_stat.py'
Jan 05 14:23:00 compute-0 sudo[68505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:00 compute-0 python3.9[68507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:00 compute-0 sudo[68505]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:01 compute-0 sudo[68628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwadvukqznupfuqlvybdzyzxzuflnqcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622980.2807617-175-254471791591045/AnsiballZ_copy.py'
Jan 05 14:23:01 compute-0 sudo[68628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:01 compute-0 python3.9[68630]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767622980.2807617-175-254471791591045/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:01 compute-0 sudo[68628]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:02 compute-0 sudo[68780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elbevgerslhtdfvtumqfygtvtszsmbzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622981.7155402-190-224659579906428/AnsiballZ_systemd.py'
Jan 05 14:23:02 compute-0 sudo[68780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:02 compute-0 python3.9[68782]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:23:02 compute-0 systemd[1]: Reloading.
Jan 05 14:23:02 compute-0 systemd-sysv-generator[68813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:23:02 compute-0 systemd-rc-local-generator[68810]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:23:02 compute-0 systemd[1]: Reloading.
Jan 05 14:23:02 compute-0 systemd-sysv-generator[68846]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:23:02 compute-0 systemd-rc-local-generator[68842]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:23:02 compute-0 systemd[1]: Starting Create netns directory...
Jan 05 14:23:02 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 05 14:23:02 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 05 14:23:02 compute-0 systemd[1]: Finished Create netns directory.
Jan 05 14:23:02 compute-0 sudo[68780]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:03 compute-0 python3.9[69008]: ansible-ansible.builtin.service_facts Invoked
Jan 05 14:23:03 compute-0 network[69025]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 14:23:03 compute-0 network[69026]: 'network-scripts' will be removed from distribution in near future.
Jan 05 14:23:03 compute-0 network[69027]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 14:23:08 compute-0 sudo[69287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ededhgantghmbwsezjctintwcuucdysd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622988.515231-206-261477007070703/AnsiballZ_systemd.py'
Jan 05 14:23:08 compute-0 sudo[69287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:09 compute-0 python3.9[69289]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:23:09 compute-0 systemd[1]: Reloading.
Jan 05 14:23:09 compute-0 systemd-sysv-generator[69320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:23:09 compute-0 systemd-rc-local-generator[69316]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:23:09 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 05 14:23:09 compute-0 iptables.init[69328]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 05 14:23:09 compute-0 iptables.init[69328]: iptables: Flushing firewall rules: [  OK  ]
Jan 05 14:23:09 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 05 14:23:09 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 05 14:23:09 compute-0 sudo[69287]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:10 compute-0 sudo[69522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upohkvaxfjlsahjoejagacbfxfwvvqte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622990.065776-206-279132000211762/AnsiballZ_systemd.py'
Jan 05 14:23:10 compute-0 sudo[69522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:10 compute-0 python3.9[69524]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:23:10 compute-0 sudo[69522]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:11 compute-0 sudo[69676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoeulstjmixhaimgttlaynefjjhaifrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622991.0641747-222-53086010254235/AnsiballZ_systemd.py'
Jan 05 14:23:11 compute-0 sudo[69676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:11 compute-0 python3.9[69678]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:23:11 compute-0 systemd[1]: Reloading.
Jan 05 14:23:11 compute-0 systemd-rc-local-generator[69708]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:23:11 compute-0 systemd-sysv-generator[69713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:23:12 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 05 14:23:12 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 05 14:23:12 compute-0 sudo[69676]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:12 compute-0 sudo[69868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjxiazzhduxjadjjmoizirayaxvioazw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622992.3271663-230-179700071133417/AnsiballZ_command.py'
Jan 05 14:23:12 compute-0 sudo[69868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:13 compute-0 python3.9[69870]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:23:13 compute-0 sudo[69868]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:13 compute-0 sudo[70021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uulsakuytcfzkqiebebrgmlgxhfoqxdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622993.543828-244-125300525273150/AnsiballZ_stat.py'
Jan 05 14:23:13 compute-0 sudo[70021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:14 compute-0 python3.9[70023]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:14 compute-0 sudo[70021]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:14 compute-0 sudo[70146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uammfjmwykalfbltpzazoihujujfbaye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622993.543828-244-125300525273150/AnsiballZ_copy.py'
Jan 05 14:23:14 compute-0 sudo[70146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:14 compute-0 python3.9[70148]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767622993.543828-244-125300525273150/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:14 compute-0 sudo[70146]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:15 compute-0 sudo[70299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eghzonwkqesxkvhygusjeshudlwcqcih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622995.0261195-259-38498999980691/AnsiballZ_systemd.py'
Jan 05 14:23:15 compute-0 sudo[70299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:15 compute-0 python3.9[70301]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:23:15 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 05 14:23:15 compute-0 sshd[1006]: Received SIGHUP; restarting.
Jan 05 14:23:15 compute-0 sshd[1006]: Server listening on 0.0.0.0 port 22.
Jan 05 14:23:15 compute-0 sshd[1006]: Server listening on :: port 22.
Jan 05 14:23:15 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 05 14:23:15 compute-0 sudo[70299]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:16 compute-0 sudo[70455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nectycstpjzrecegnkxskzjrgaeeptzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622995.9267092-267-228453254386387/AnsiballZ_file.py'
Jan 05 14:23:16 compute-0 sudo[70455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:16 compute-0 python3.9[70457]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:16 compute-0 sudo[70455]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:16 compute-0 sudo[70607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niysvsjkaiapuzwusonxfeycnldiwoup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622996.6498363-275-142941205569924/AnsiballZ_stat.py'
Jan 05 14:23:16 compute-0 sudo[70607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:17 compute-0 python3.9[70609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:17 compute-0 sudo[70607]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:17 compute-0 sudo[70730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pasdroeygxuvusnnilxjhgftfootyjhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622996.6498363-275-142941205569924/AnsiballZ_copy.py'
Jan 05 14:23:17 compute-0 sudo[70730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:17 compute-0 python3.9[70732]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767622996.6498363-275-142941205569924/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:17 compute-0 sudo[70730]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:18 compute-0 sudo[70882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usnthipqzxiokxwszpawgprfyxdgwegb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622998.1842542-293-8351351943322/AnsiballZ_timezone.py'
Jan 05 14:23:18 compute-0 sudo[70882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:18 compute-0 python3.9[70884]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 05 14:23:19 compute-0 systemd[1]: Starting Time & Date Service...
Jan 05 14:23:19 compute-0 systemd[1]: Started Time & Date Service.
Jan 05 14:23:19 compute-0 sudo[70882]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:19 compute-0 sudo[71038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-berfyhmvgkzxcagdmmbsmchnbyorrzwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767622999.4168043-302-174325078302229/AnsiballZ_file.py'
Jan 05 14:23:19 compute-0 sudo[71038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:20 compute-0 python3.9[71040]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:20 compute-0 sudo[71038]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:20 compute-0 sudo[71190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snqeguuxcxtyinsfzkmsqpekhnpxaibz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623000.2203176-310-86631956937027/AnsiballZ_stat.py'
Jan 05 14:23:20 compute-0 sudo[71190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:20 compute-0 python3.9[71192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:20 compute-0 sudo[71190]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:21 compute-0 sudo[71313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orhhbybacysqkkndyxmzkwhtfqvvzjxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623000.2203176-310-86631956937027/AnsiballZ_copy.py'
Jan 05 14:23:21 compute-0 sudo[71313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:21 compute-0 python3.9[71315]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623000.2203176-310-86631956937027/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:21 compute-0 sudo[71313]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:21 compute-0 sudo[71465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ordhqdhvnhvoboxjwuriwzbrieqytagh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623001.6020086-325-160641019178013/AnsiballZ_stat.py'
Jan 05 14:23:21 compute-0 sudo[71465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:22 compute-0 python3.9[71467]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:22 compute-0 sudo[71465]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:22 compute-0 sudo[71588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjxolovannozrdrtbkymrsykcbygqtex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623001.6020086-325-160641019178013/AnsiballZ_copy.py'
Jan 05 14:23:22 compute-0 sudo[71588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:22 compute-0 python3.9[71590]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623001.6020086-325-160641019178013/.source.yaml _original_basename=.niu5ps3f follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:22 compute-0 sudo[71588]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:23 compute-0 sudo[71740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrhfiibgcggujugxfdnskdiiqzisbgnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623002.9925895-340-164621785029574/AnsiballZ_stat.py'
Jan 05 14:23:23 compute-0 sudo[71740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:23 compute-0 python3.9[71742]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:23 compute-0 sudo[71740]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:24 compute-0 sudo[71863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpcrovuezrjvwjslqgvdyfcsrvimzase ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623002.9925895-340-164621785029574/AnsiballZ_copy.py'
Jan 05 14:23:24 compute-0 sudo[71863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:24 compute-0 python3.9[71865]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623002.9925895-340-164621785029574/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:24 compute-0 sudo[71863]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:24 compute-0 sudo[72015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqsvfggnnnhudrlekytqwfxltytiwaxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623004.441741-355-235087761114710/AnsiballZ_command.py'
Jan 05 14:23:24 compute-0 sudo[72015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:24 compute-0 python3.9[72017]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:23:24 compute-0 sudo[72015]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:25 compute-0 sudo[72168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxgkvemaqkuxjdrqdqcpygufhplcxop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623005.2645395-363-82479544449756/AnsiballZ_command.py'
Jan 05 14:23:25 compute-0 sudo[72168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:25 compute-0 python3.9[72170]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:23:25 compute-0 sudo[72168]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:26 compute-0 sudo[72321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klnwlemhrdkvsxtlvusxqorottxpmpih ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623005.9919116-371-111335246076196/AnsiballZ_edpm_nftables_from_files.py'
Jan 05 14:23:26 compute-0 sudo[72321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:26 compute-0 python3[72323]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 05 14:23:26 compute-0 sudo[72321]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:27 compute-0 sudo[72473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfffytstcwgkprlxvyfpcidjbyrisdtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623006.8703783-379-202894374982985/AnsiballZ_stat.py'
Jan 05 14:23:27 compute-0 sudo[72473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:27 compute-0 python3.9[72475]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:27 compute-0 sudo[72473]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:27 compute-0 sudo[72596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyogwfaubkvqhatjshzngsyfurunveef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623006.8703783-379-202894374982985/AnsiballZ_copy.py'
Jan 05 14:23:27 compute-0 sudo[72596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:28 compute-0 python3.9[72598]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623006.8703783-379-202894374982985/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:28 compute-0 sudo[72596]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:28 compute-0 sudo[72748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akersfhzcjpnjkdevtqicqlqqktqxuqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623008.2362182-394-163903124732966/AnsiballZ_stat.py'
Jan 05 14:23:28 compute-0 sudo[72748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:28 compute-0 python3.9[72750]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:28 compute-0 sudo[72748]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:29 compute-0 sudo[72871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkngngtcekoutbnrbzjcicycwxjmjtnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623008.2362182-394-163903124732966/AnsiballZ_copy.py'
Jan 05 14:23:29 compute-0 sudo[72871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:29 compute-0 python3.9[72873]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623008.2362182-394-163903124732966/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:29 compute-0 sudo[72871]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:29 compute-0 sudo[73023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqhlcumaixodpdwiqscctbtiennmdzlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623009.6168997-409-152827620796235/AnsiballZ_stat.py'
Jan 05 14:23:29 compute-0 sudo[73023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:30 compute-0 python3.9[73025]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:30 compute-0 sudo[73023]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:30 compute-0 sudo[73146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhwlqhnstktfsusfmwjsqtjvwfvaxgpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623009.6168997-409-152827620796235/AnsiballZ_copy.py'
Jan 05 14:23:30 compute-0 sudo[73146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:30 compute-0 python3.9[73148]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623009.6168997-409-152827620796235/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:30 compute-0 sudo[73146]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:31 compute-0 sudo[73298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luhdpjouzbcbnmvidohrpoveqjrglill ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623011.0614169-424-174035106045422/AnsiballZ_stat.py'
Jan 05 14:23:31 compute-0 sudo[73298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:31 compute-0 python3.9[73300]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:31 compute-0 sudo[73298]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:32 compute-0 sudo[73421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imyvtuwpvwenatrcpmgvamwzylfczjni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623011.0614169-424-174035106045422/AnsiballZ_copy.py'
Jan 05 14:23:32 compute-0 sudo[73421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:32 compute-0 python3.9[73423]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623011.0614169-424-174035106045422/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:32 compute-0 sudo[73421]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:32 compute-0 sudo[73573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlygwslyxqmezqzmjcqtucjphojnfjmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623012.5503109-439-138959647310663/AnsiballZ_stat.py'
Jan 05 14:23:32 compute-0 sudo[73573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:33 compute-0 python3.9[73575]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:23:33 compute-0 sudo[73573]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:33 compute-0 sudo[73696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snqurruxgxvitvnwphzyudyfgkepewvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623012.5503109-439-138959647310663/AnsiballZ_copy.py'
Jan 05 14:23:33 compute-0 sudo[73696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:33 compute-0 python3.9[73698]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623012.5503109-439-138959647310663/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:33 compute-0 sudo[73696]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:34 compute-0 sudo[73848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqvxxwnkcvpaqobftgkevmdpdzuexpat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623014.0995448-454-251920209413409/AnsiballZ_file.py'
Jan 05 14:23:34 compute-0 sudo[73848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:34 compute-0 python3.9[73850]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:34 compute-0 sudo[73848]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:35 compute-0 sudo[74000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmjlyrvgtmqzisruvjbswrtujxjlktzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623014.9261315-462-86741582636328/AnsiballZ_command.py'
Jan 05 14:23:35 compute-0 sudo[74000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:35 compute-0 python3.9[74002]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:23:35 compute-0 sudo[74000]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:36 compute-0 sudo[74159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oounzqjisuqthqgnnzrytzojoarjbrit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623015.8127794-470-140457894320005/AnsiballZ_blockinfile.py'
Jan 05 14:23:36 compute-0 sudo[74159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:36 compute-0 python3.9[74161]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:36 compute-0 sudo[74159]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:37 compute-0 sudo[74312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atxxueqkjlsrmlyfftvvyfpbveywsztm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623016.880573-479-148587461315713/AnsiballZ_file.py'
Jan 05 14:23:37 compute-0 sudo[74312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:37 compute-0 python3.9[74314]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:37 compute-0 sudo[74312]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:37 compute-0 sudo[74464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqrgdqmnnjwddqeanguvhylpinzjrcwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623017.6011863-479-143065136380275/AnsiballZ_file.py'
Jan 05 14:23:37 compute-0 sudo[74464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:38 compute-0 python3.9[74466]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:38 compute-0 sudo[74464]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:39 compute-0 sudo[74616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knglnhkzgluiszgjvimatzvkcyudqqiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623018.4224381-494-89782952860936/AnsiballZ_mount.py'
Jan 05 14:23:39 compute-0 sudo[74616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:39 compute-0 python3.9[74618]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 05 14:23:39 compute-0 sudo[74616]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:39 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 14:23:39 compute-0 sudo[74770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vteehsiqfschmdhdywywscservcgncoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623019.5034847-494-135252503522469/AnsiballZ_mount.py'
Jan 05 14:23:39 compute-0 sudo[74770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:40 compute-0 python3.9[74772]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 05 14:23:40 compute-0 sudo[74770]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:40 compute-0 sshd-session[65610]: Connection closed by 192.168.122.30 port 49036
Jan 05 14:23:40 compute-0 sshd-session[65607]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:23:40 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 05 14:23:40 compute-0 systemd[1]: session-15.scope: Consumed 41.925s CPU time.
Jan 05 14:23:40 compute-0 systemd-logind[795]: Session 15 logged out. Waiting for processes to exit.
Jan 05 14:23:40 compute-0 systemd-logind[795]: Removed session 15.
Jan 05 14:23:45 compute-0 sshd-session[74799]: Accepted publickey for zuul from 192.168.122.30 port 56870 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:23:45 compute-0 systemd-logind[795]: New session 16 of user zuul.
Jan 05 14:23:45 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 05 14:23:45 compute-0 sshd-session[74799]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:23:46 compute-0 sudo[74952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fophxkrcoroxfdyozlcofivsqxgxrlmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623026.0863655-16-97417087354297/AnsiballZ_tempfile.py'
Jan 05 14:23:46 compute-0 sudo[74952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:46 compute-0 python3.9[74954]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 05 14:23:46 compute-0 sudo[74952]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:47 compute-0 sudo[75104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwwzyojbjzqqqdapibewxqptdvfwmnvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623027.1792836-28-138744703386320/AnsiballZ_stat.py'
Jan 05 14:23:47 compute-0 sudo[75104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:47 compute-0 python3.9[75106]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:23:47 compute-0 sudo[75104]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:48 compute-0 sudo[75256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijnhywebnuxuuknhrecanqjtnkzgnpsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623028.208965-38-90634054964833/AnsiballZ_setup.py'
Jan 05 14:23:48 compute-0 sudo[75256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:49 compute-0 python3.9[75258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:23:49 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 05 14:23:49 compute-0 sudo[75256]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:50 compute-0 sudo[75410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfendrlhujujrrmlpuggtomptebfbvgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623029.5239267-47-86755204197555/AnsiballZ_blockinfile.py'
Jan 05 14:23:50 compute-0 sudo[75410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:50 compute-0 python3.9[75412]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwv8f5qFA9ysDQ4x+0ORscXkNgBJ+OEYbdhxYNfQ4uQozR98zms/6GwonD1TvwpW7Njl1V01ih2W3mXdzMe9/6j4cSMIOQAMsTS4u/PAps54+z/BLRenBvAQP9OMYx/zIK1vzhXpSNwqeU7lokUf3u+FhQ6jbL3nGaSYNQ4XK/qOUazqZuz+rEYC6FFkP3is1TrNkhg4PV84KfoQARKxbi0sVnMMPFg2Hz4vSpfghAP70sjYsSqXIrSd0RovSfqLv9ygsryTlyDgPns4I8LudOjrI+h1ppOpM9CnqqEg1bxZ8au9Q5YdypBMjw5BI6paivl2EL5vqOsDrJa5Xsu1yqsx2rHeQjrxjyncUmUZnquyhHIjej/2EEZYPbOHelDYcNmCD4Oyd0BGiEqkcbTHWpxMYQH78uPJW5ryHO7aiO+QbOGLngckZkZvDMoWJ2wjcI8meIwf9idKNXyEDwGAIcEmTFmNMWBbiNlNH6tKQngzUw3O5s2SXgpx4n6MrHcd8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK2j1vuoR9kT/POM24u3nKf5UURYmnXt3xRp61hP4/a2
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLZaDDmS46jcPgeYpnNeAllUbY39zCaKF85b20N8Hj/2pPDfpYdwIoFIvdo3216zVLwCh8ikMJxEuCoURw9eFes=
                                             create=True mode=0644 path=/tmp/ansible.ypik2g9z state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:50 compute-0 sudo[75410]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:51 compute-0 sudo[75562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxgimjtgnhcfjohnylcsynuimcbvnqza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623030.5098872-55-198363119515061/AnsiballZ_command.py'
Jan 05 14:23:51 compute-0 sudo[75562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:51 compute-0 python3.9[75564]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ypik2g9z' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:23:51 compute-0 sudo[75562]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:52 compute-0 sudo[75716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zydgjjohnxqymrhtypezcxcstfnuswlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623031.4805021-63-71400518215735/AnsiballZ_file.py'
Jan 05 14:23:52 compute-0 sudo[75716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:23:52 compute-0 python3.9[75718]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ypik2g9z state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:23:52 compute-0 sudo[75716]: pam_unix(sudo:session): session closed for user root
Jan 05 14:23:52 compute-0 sshd-session[74802]: Connection closed by 192.168.122.30 port 56870
Jan 05 14:23:52 compute-0 sshd-session[74799]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:23:52 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 05 14:23:52 compute-0 systemd[1]: session-16.scope: Consumed 4.340s CPU time.
Jan 05 14:23:52 compute-0 systemd-logind[795]: Session 16 logged out. Waiting for processes to exit.
Jan 05 14:23:52 compute-0 systemd-logind[795]: Removed session 16.
Jan 05 14:23:58 compute-0 sshd-session[75743]: Accepted publickey for zuul from 192.168.122.30 port 55788 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:23:58 compute-0 systemd-logind[795]: New session 17 of user zuul.
Jan 05 14:23:58 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 05 14:23:58 compute-0 sshd-session[75743]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:23:59 compute-0 python3.9[75896]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:24:00 compute-0 sudo[76050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cecicdipqximxthawiehlcclutpphgbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623039.520032-32-127373040130902/AnsiballZ_systemd.py'
Jan 05 14:24:00 compute-0 sudo[76050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:00 compute-0 python3.9[76052]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 05 14:24:00 compute-0 sudo[76050]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:01 compute-0 sudo[76204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifshugkhwjwhmqbixldzbusnlxzffacy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623040.6247377-40-118135913952411/AnsiballZ_systemd.py'
Jan 05 14:24:01 compute-0 sudo[76204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:01 compute-0 python3.9[76206]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:24:01 compute-0 sudo[76204]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:02 compute-0 sudo[76357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huavgridkkwhppmdxhslqvxtwtqsopwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623041.6119623-49-135924241468141/AnsiballZ_command.py'
Jan 05 14:24:02 compute-0 sudo[76357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:02 compute-0 python3.9[76359]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:24:02 compute-0 sudo[76357]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:03 compute-0 sudo[76510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qstaqrvtgnecghkmxdkpsunsqaeqgunr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623042.5332222-57-170662433579752/AnsiballZ_stat.py'
Jan 05 14:24:03 compute-0 sudo[76510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:03 compute-0 python3.9[76512]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:24:03 compute-0 sudo[76510]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:03 compute-0 sudo[76664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwzmuejxechcadxqjgpelnwthxoxthxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623043.4787753-65-192618540969386/AnsiballZ_command.py'
Jan 05 14:24:03 compute-0 sudo[76664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:04 compute-0 python3.9[76666]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:24:04 compute-0 sudo[76664]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:04 compute-0 sudo[76819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crjpgqwozvvjjkxugyqchuyrcjhpuydh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623044.2296815-73-143942188385052/AnsiballZ_file.py'
Jan 05 14:24:04 compute-0 sudo[76819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:04 compute-0 python3.9[76821]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:04 compute-0 sudo[76819]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:05 compute-0 sshd-session[75746]: Connection closed by 192.168.122.30 port 55788
Jan 05 14:24:05 compute-0 sshd-session[75743]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:24:05 compute-0 systemd-logind[795]: Session 17 logged out. Waiting for processes to exit.
Jan 05 14:24:05 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 05 14:24:05 compute-0 systemd[1]: session-17.scope: Consumed 4.974s CPU time.
Jan 05 14:24:05 compute-0 systemd-logind[795]: Removed session 17.
Jan 05 14:24:11 compute-0 sshd-session[76846]: Accepted publickey for zuul from 192.168.122.30 port 33342 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:24:11 compute-0 systemd-logind[795]: New session 18 of user zuul.
Jan 05 14:24:11 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 05 14:24:11 compute-0 sshd-session[76846]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:24:12 compute-0 python3.9[76999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:24:13 compute-0 sudo[77153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beekzcqhnjknxbasflohreyymijcvdzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623053.4653878-34-133263623709183/AnsiballZ_setup.py'
Jan 05 14:24:13 compute-0 sudo[77153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:14 compute-0 python3.9[77155]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:24:14 compute-0 sudo[77153]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:14 compute-0 sudo[77237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdvdvijnrinqqrfhlndvqbkaudodmifr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623053.4653878-34-133263623709183/AnsiballZ_dnf.py'
Jan 05 14:24:14 compute-0 sudo[77237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:15 compute-0 python3.9[77239]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 14:24:16 compute-0 sudo[77237]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:17 compute-0 python3.9[77390]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:24:18 compute-0 python3.9[77541]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 14:24:19 compute-0 python3.9[77691]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:24:20 compute-0 python3.9[77841]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:24:20 compute-0 sshd-session[76849]: Connection closed by 192.168.122.30 port 33342
Jan 05 14:24:20 compute-0 sshd-session[76846]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:24:20 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 05 14:24:20 compute-0 systemd[1]: session-18.scope: Consumed 6.538s CPU time.
Jan 05 14:24:20 compute-0 systemd-logind[795]: Session 18 logged out. Waiting for processes to exit.
Jan 05 14:24:20 compute-0 systemd-logind[795]: Removed session 18.
Jan 05 14:24:27 compute-0 sshd-session[77866]: Accepted publickey for zuul from 192.168.122.30 port 39212 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:24:27 compute-0 systemd-logind[795]: New session 19 of user zuul.
Jan 05 14:24:27 compute-0 systemd[1]: Started Session 19 of User zuul.
Jan 05 14:24:27 compute-0 sshd-session[77866]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:24:28 compute-0 python3.9[78019]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:24:29 compute-0 sudo[78173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bginlyjtqfrgtioljkenhoxxprtwtsmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623069.3018558-50-130910452353416/AnsiballZ_file.py'
Jan 05 14:24:29 compute-0 sudo[78173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:30 compute-0 python3.9[78175]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:24:30 compute-0 sudo[78173]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:30 compute-0 sudo[78325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryjvfetrueckxutjszogumtoqcezijgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623070.4624517-50-279490480149204/AnsiballZ_file.py'
Jan 05 14:24:30 compute-0 sudo[78325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:31 compute-0 python3.9[78327]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:24:31 compute-0 sudo[78325]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:31 compute-0 sudo[78477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewdkiruefyrayqlaioojjigqsdcgevwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623071.3267663-65-44718720235573/AnsiballZ_stat.py'
Jan 05 14:24:31 compute-0 sudo[78477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:32 compute-0 python3.9[78479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:32 compute-0 sudo[78477]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:32 compute-0 sudo[78600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdyuogqoyoyvtpktifdeostwjyiyoonc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623071.3267663-65-44718720235573/AnsiballZ_copy.py'
Jan 05 14:24:32 compute-0 sudo[78600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:32 compute-0 python3.9[78602]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623071.3267663-65-44718720235573/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=98274112fdf004a989e8392795a320d0a3eb1a0e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:32 compute-0 sudo[78600]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:33 compute-0 sudo[78752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnvezvswgyldolvlcidjumnnnbsglmlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623073.0587707-65-81601768848785/AnsiballZ_stat.py'
Jan 05 14:24:33 compute-0 sudo[78752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:33 compute-0 python3.9[78754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:33 compute-0 sudo[78752]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:34 compute-0 sudo[78875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrvqbmvgspngrjsaofqbjnjrdggcbrsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623073.0587707-65-81601768848785/AnsiballZ_copy.py'
Jan 05 14:24:34 compute-0 sudo[78875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:34 compute-0 python3.9[78877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623073.0587707-65-81601768848785/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=132f787b54ae0184c65802727d4a0550dbdc1d4e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:34 compute-0 sudo[78875]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:34 compute-0 sudo[79027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-witftcuupuvzaacbjotnvncnpnjzqrjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623074.5959263-65-117994784667852/AnsiballZ_stat.py'
Jan 05 14:24:34 compute-0 sudo[79027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:35 compute-0 python3.9[79029]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:35 compute-0 sudo[79027]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:35 compute-0 sudo[79150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irssailpaabybbdfqqcjwzhfjjffdbxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623074.5959263-65-117994784667852/AnsiballZ_copy.py'
Jan 05 14:24:35 compute-0 sudo[79150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:35 compute-0 python3.9[79152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623074.5959263-65-117994784667852/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=98b2140f99bd8f8e5a08a1500a7a2e88b530669a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:35 compute-0 sudo[79150]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:36 compute-0 sudo[79302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpfvateqspqpylzyhdzawdhgyawrxzjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623076.0845976-109-153561880475750/AnsiballZ_file.py'
Jan 05 14:24:36 compute-0 sudo[79302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:36 compute-0 python3.9[79304]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:24:36 compute-0 sudo[79302]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:37 compute-0 sudo[79454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqniqsvdwrivyffcqddpmqcdkrykqfly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623076.8461955-109-200577279900919/AnsiballZ_file.py'
Jan 05 14:24:37 compute-0 sudo[79454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:37 compute-0 python3.9[79456]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:24:37 compute-0 sudo[79454]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:37 compute-0 sudo[79606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktpsnhcgojirwifjjyyfikrrxkyftzjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623077.6178584-124-197073519423060/AnsiballZ_stat.py'
Jan 05 14:24:38 compute-0 sudo[79606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:38 compute-0 python3.9[79608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:38 compute-0 sudo[79606]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:38 compute-0 sudo[79729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eemhgltdcaapuekbgtomnnhlumpvoibw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623077.6178584-124-197073519423060/AnsiballZ_copy.py'
Jan 05 14:24:38 compute-0 sudo[79729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:38 compute-0 python3.9[79731]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623077.6178584-124-197073519423060/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=db2566941e74b78573205dfd357792bf78423ebc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:38 compute-0 sudo[79729]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:39 compute-0 sudo[79881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekfigqkhktvqjdwzwjgkhoxjjftttrrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623079.050742-124-223839001285453/AnsiballZ_stat.py'
Jan 05 14:24:39 compute-0 sudo[79881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:39 compute-0 python3.9[79883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:39 compute-0 sudo[79881]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:40 compute-0 sudo[80004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kndfpmdajucdmxvwtatufppjpjwpfvgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623079.050742-124-223839001285453/AnsiballZ_copy.py'
Jan 05 14:24:40 compute-0 sudo[80004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:40 compute-0 python3.9[80006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623079.050742-124-223839001285453/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=132f787b54ae0184c65802727d4a0550dbdc1d4e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:40 compute-0 sudo[80004]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:40 compute-0 sudo[80156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-didrpohmfthrhjxdpbpbwolnaygafprr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623080.4461787-124-273827112629216/AnsiballZ_stat.py'
Jan 05 14:24:40 compute-0 sudo[80156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:41 compute-0 python3.9[80158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:41 compute-0 sudo[80156]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:41 compute-0 sudo[80279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffaebxrpfdhokctfzerfjuwibbvejyhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623080.4461787-124-273827112629216/AnsiballZ_copy.py'
Jan 05 14:24:41 compute-0 sudo[80279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:41 compute-0 python3.9[80281]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623080.4461787-124-273827112629216/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=9948dc9ab6c63a28376db483519a394f3bf98067 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:41 compute-0 sudo[80279]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:42 compute-0 sudo[80431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjvlmbuszhxtzvaotowppygtkxllvkgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623081.9665997-168-104284455374725/AnsiballZ_file.py'
Jan 05 14:24:42 compute-0 sudo[80431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:42 compute-0 python3.9[80433]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:24:42 compute-0 sudo[80431]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:43 compute-0 sudo[80583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjiavzxorcvpsueidjijkcaucajyljru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623082.7703488-168-41770057278703/AnsiballZ_file.py'
Jan 05 14:24:43 compute-0 sudo[80583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:43 compute-0 python3.9[80585]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:24:43 compute-0 sudo[80583]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:44 compute-0 sudo[80735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obveejqxpvwkewsdlcspsnlivqqytzyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623083.6597028-183-113990834731325/AnsiballZ_stat.py'
Jan 05 14:24:44 compute-0 sudo[80735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:44 compute-0 python3.9[80737]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:44 compute-0 sudo[80735]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:44 compute-0 sudo[80858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvqnvalsdoayfslvtemanwwwvuzppfxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623083.6597028-183-113990834731325/AnsiballZ_copy.py'
Jan 05 14:24:44 compute-0 sudo[80858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:44 compute-0 python3.9[80860]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623083.6597028-183-113990834731325/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4177989102a99c7a9701c22fd8245eb1c88f0962 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:44 compute-0 sudo[80858]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:45 compute-0 sudo[81010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrxdqvufjtvekhimpblamyarsxjpsiyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623085.1056736-183-94832934187017/AnsiballZ_stat.py'
Jan 05 14:24:45 compute-0 sudo[81010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:45 compute-0 python3.9[81012]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:45 compute-0 sudo[81010]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:46 compute-0 sudo[81133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grqukshdxkqrhhctqihgewbpywxzbzbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623085.1056736-183-94832934187017/AnsiballZ_copy.py'
Jan 05 14:24:46 compute-0 sudo[81133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:46 compute-0 python3.9[81135]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623085.1056736-183-94832934187017/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=18404b64fbeafb793cdcf08460cd83e88f3ff884 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:46 compute-0 sudo[81133]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:46 compute-0 sudo[81285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbvjfuaucmkycsixdghegfjphphpnoag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623086.5663874-183-261945563994428/AnsiballZ_stat.py'
Jan 05 14:24:46 compute-0 sudo[81285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:47 compute-0 python3.9[81287]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:47 compute-0 sudo[81285]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:47 compute-0 sudo[81408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkmwrbybdtcpkzlgoffsswgxcrcfnyng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623086.5663874-183-261945563994428/AnsiballZ_copy.py'
Jan 05 14:24:47 compute-0 sudo[81408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:47 compute-0 python3.9[81410]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623086.5663874-183-261945563994428/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1ec518276e12d98e321cfa8103dde25e216e40b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:47 compute-0 sudo[81408]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:48 compute-0 sudo[81560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntnehyqnqzvbxnhbagcztephpscxwlpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623088.018924-227-13703411720170/AnsiballZ_file.py'
Jan 05 14:24:48 compute-0 sudo[81560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:48 compute-0 python3.9[81562]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:24:48 compute-0 sudo[81560]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:49 compute-0 sudo[81712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgcehafxrferfkneviadugxjqvvszlry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623088.7559702-227-184938266042660/AnsiballZ_file.py'
Jan 05 14:24:49 compute-0 sudo[81712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:49 compute-0 python3.9[81714]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:24:49 compute-0 sudo[81712]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:50 compute-0 sudo[81864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaxddxcznwwjossynsqaigprzghfwxox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623089.6053007-242-105039576030615/AnsiballZ_stat.py'
Jan 05 14:24:50 compute-0 sudo[81864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:50 compute-0 python3.9[81866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:50 compute-0 sudo[81864]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:50 compute-0 sudo[81987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyglhgbarnmkvqwrsvgikbvkjxgxrcig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623089.6053007-242-105039576030615/AnsiballZ_copy.py'
Jan 05 14:24:50 compute-0 sudo[81987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:50 compute-0 python3.9[81989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623089.6053007-242-105039576030615/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=70a4fda6da0b2a7131b1753d71da68aaba525358 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:50 compute-0 sudo[81987]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:51 compute-0 sshd-session[81990]: Invalid user solv from 165.22.168.95 port 52562
Jan 05 14:24:51 compute-0 sshd-session[81990]: Connection closed by invalid user solv 165.22.168.95 port 52562 [preauth]
Jan 05 14:24:51 compute-0 sudo[82141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jumnlujcnpwblggtlrgsyabmwnijujqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623091.0567489-242-29618185828039/AnsiballZ_stat.py'
Jan 05 14:24:51 compute-0 sudo[82141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:51 compute-0 python3.9[82143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:51 compute-0 sudo[82141]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:52 compute-0 sudo[82264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-widacerpctshysdvjljtxurigkmwrfca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623091.0567489-242-29618185828039/AnsiballZ_copy.py'
Jan 05 14:24:52 compute-0 sudo[82264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:52 compute-0 python3.9[82266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623091.0567489-242-29618185828039/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9fd0f8247a0a3ea30dc67f9b709a51c009221ca0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:52 compute-0 sudo[82264]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:52 compute-0 sudo[82416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvmtodosgqycqjnsyehnqwlfdcsgymyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623092.4691489-242-176420452194095/AnsiballZ_stat.py'
Jan 05 14:24:52 compute-0 sudo[82416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:52 compute-0 python3.9[82418]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:52 compute-0 sudo[82416]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:53 compute-0 sudo[82539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfjxtaknxgxrlfdzeqfvfvlltoryntfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623092.4691489-242-176420452194095/AnsiballZ_copy.py'
Jan 05 14:24:53 compute-0 sudo[82539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:53 compute-0 python3.9[82541]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623092.4691489-242-176420452194095/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ab5830cb83d468d4844ea3c514c72c476c08e077 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:53 compute-0 sudo[82539]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:54 compute-0 sudo[82691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dibxtekoguwktdtcovadouiommogtxzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623093.9425428-286-54328973796639/AnsiballZ_file.py'
Jan 05 14:24:54 compute-0 sudo[82691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:54 compute-0 python3.9[82693]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:24:54 compute-0 sudo[82691]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:55 compute-0 sudo[82843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnifdhqhdpiquulasxghdusamoriooek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623094.6363492-286-72261073102594/AnsiballZ_file.py'
Jan 05 14:24:55 compute-0 sudo[82843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:55 compute-0 python3.9[82845]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:24:55 compute-0 sudo[82843]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:55 compute-0 sudo[82995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-debszerlfwvlcsjsyomlmejtqkravmli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623095.4979832-301-138534770009383/AnsiballZ_stat.py'
Jan 05 14:24:55 compute-0 sudo[82995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:56 compute-0 python3.9[82997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:56 compute-0 sudo[82995]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:56 compute-0 sudo[83118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzfdzdpkhdrgbkzbqsoblbcpyhugubaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623095.4979832-301-138534770009383/AnsiballZ_copy.py'
Jan 05 14:24:56 compute-0 sudo[83118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:56 compute-0 python3.9[83120]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623095.4979832-301-138534770009383/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=23b97dce603dff0dfcf07603f5e6131232d440b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:56 compute-0 sudo[83118]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:57 compute-0 sudo[83270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urupcszqjtcaapbmoxxveucimngfkkae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623096.9375203-301-149968329275831/AnsiballZ_stat.py'
Jan 05 14:24:57 compute-0 sudo[83270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:57 compute-0 python3.9[83272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:57 compute-0 sudo[83270]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:57 compute-0 sudo[83393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvxompshjnhurwumierdafmubmmuhexs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623096.9375203-301-149968329275831/AnsiballZ_copy.py'
Jan 05 14:24:57 compute-0 sudo[83393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:58 compute-0 python3.9[83395]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623096.9375203-301-149968329275831/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=18404b64fbeafb793cdcf08460cd83e88f3ff884 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:58 compute-0 sudo[83393]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:58 compute-0 sudo[83545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfgdmtpvrpdxsyajjeaicrjkvnhhvrax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623098.271141-301-276002166798968/AnsiballZ_stat.py'
Jan 05 14:24:58 compute-0 sudo[83545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:58 compute-0 python3.9[83547]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:24:58 compute-0 sudo[83545]: pam_unix(sudo:session): session closed for user root
Jan 05 14:24:59 compute-0 sudo[83668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-effnnmeilttfzzxvopyovycoshjhxhxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623098.271141-301-276002166798968/AnsiballZ_copy.py'
Jan 05 14:24:59 compute-0 sudo[83668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:24:59 compute-0 python3.9[83670]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623098.271141-301-276002166798968/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f0a7ba0352fdbd839d80d92209eef9bf7efa5f26 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:24:59 compute-0 sudo[83668]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:00 compute-0 sudo[83820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idwakvnpqdypnqolmdterqoeywmqbwmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623100.1716888-361-13934850997304/AnsiballZ_file.py'
Jan 05 14:25:00 compute-0 sudo[83820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:00 compute-0 python3.9[83822]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:25:00 compute-0 sudo[83820]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:01 compute-0 sudo[83972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkacxlrhrqlmwogaihdjulumhsfhlgbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623101.012171-369-18210784572672/AnsiballZ_stat.py'
Jan 05 14:25:01 compute-0 sudo[83972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:01 compute-0 python3.9[83974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:01 compute-0 sudo[83972]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:02 compute-0 sudo[84095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvbmpkrvxslvpdooqnuhauejsxdkdbuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623101.012171-369-18210784572672/AnsiballZ_copy.py'
Jan 05 14:25:02 compute-0 sudo[84095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:02 compute-0 python3.9[84097]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623101.012171-369-18210784572672/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fb35b0cceb6bbb3806e5a7af9cadd640cd52197d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:02 compute-0 sudo[84095]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:02 compute-0 sudo[84247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijuekkohxdmurfnqzmxkzbjhirruywjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623102.6108654-385-208813791768753/AnsiballZ_file.py'
Jan 05 14:25:02 compute-0 sudo[84247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:03 compute-0 python3.9[84249]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:25:03 compute-0 sudo[84247]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:03 compute-0 sudo[84399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbqhjlpptkrrlvjmhpftkarrpuibubua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623103.3926876-393-128001234833526/AnsiballZ_stat.py'
Jan 05 14:25:03 compute-0 sudo[84399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:03 compute-0 python3.9[84401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:03 compute-0 sudo[84399]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:04 compute-0 sudo[84522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfbafguxhhfbifextvkougiiifadtgdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623103.3926876-393-128001234833526/AnsiballZ_copy.py'
Jan 05 14:25:04 compute-0 sudo[84522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:04 compute-0 python3.9[84524]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623103.3926876-393-128001234833526/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fb35b0cceb6bbb3806e5a7af9cadd640cd52197d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:04 compute-0 sudo[84522]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:05 compute-0 sudo[84674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqnlrzgegadygfnygnhbtcshxiwudkju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623104.8926163-409-235674619594773/AnsiballZ_file.py'
Jan 05 14:25:05 compute-0 sudo[84674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:05 compute-0 python3.9[84676]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:25:05 compute-0 sudo[84674]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:06 compute-0 sudo[84826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otcncnygctkzfhtkjjellscgzokarcqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623105.878237-417-132908596050971/AnsiballZ_stat.py'
Jan 05 14:25:06 compute-0 sudo[84826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:06 compute-0 python3.9[84828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:06 compute-0 sudo[84826]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:06 compute-0 sudo[84949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siatscgiigdwlhpejnsrhkyfismiyicw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623105.878237-417-132908596050971/AnsiballZ_copy.py'
Jan 05 14:25:06 compute-0 sudo[84949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:07 compute-0 python3.9[84951]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623105.878237-417-132908596050971/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fb35b0cceb6bbb3806e5a7af9cadd640cd52197d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:07 compute-0 sudo[84949]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:07 compute-0 sudo[85101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djyjllfrsrwiuwkkmvgaejvshrymoatx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623107.2814722-433-265635330042102/AnsiballZ_file.py'
Jan 05 14:25:07 compute-0 sudo[85101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:07 compute-0 python3.9[85103]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:25:07 compute-0 sudo[85101]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:08 compute-0 sudo[85253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umanzcbqkjfcrlbamnoahoooeyfmzuso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623108.1359367-441-40821971615320/AnsiballZ_stat.py'
Jan 05 14:25:08 compute-0 sudo[85253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:08 compute-0 python3.9[85255]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:08 compute-0 sudo[85253]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:09 compute-0 sudo[85376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bznqbxoiqrtqvypmugmkzqhyypzakpog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623108.1359367-441-40821971615320/AnsiballZ_copy.py'
Jan 05 14:25:09 compute-0 sudo[85376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:09 compute-0 python3.9[85378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623108.1359367-441-40821971615320/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fb35b0cceb6bbb3806e5a7af9cadd640cd52197d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:09 compute-0 sudo[85376]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:10 compute-0 sudo[85528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eptkwuvhefdktgrbqgmgpvulgrrvsjwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623109.648855-457-32515723965016/AnsiballZ_file.py'
Jan 05 14:25:10 compute-0 sudo[85528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:10 compute-0 python3.9[85530]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:25:10 compute-0 sudo[85528]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:10 compute-0 sudo[85680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrogvahdwvvopdyhdynmvqgtgzczfrge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623110.4229531-465-162871687718528/AnsiballZ_stat.py'
Jan 05 14:25:10 compute-0 sudo[85680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:11 compute-0 python3.9[85682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:11 compute-0 sudo[85680]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:11 compute-0 sudo[85803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzpnjobpscvaekkpjpkgocxmsqiryubm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623110.4229531-465-162871687718528/AnsiballZ_copy.py'
Jan 05 14:25:11 compute-0 sudo[85803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:11 compute-0 python3.9[85805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623110.4229531-465-162871687718528/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fb35b0cceb6bbb3806e5a7af9cadd640cd52197d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:11 compute-0 sudo[85803]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:12 compute-0 sudo[85955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpjckvxtryntapdigockgakxxbxjpmks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623112.0393124-481-65384203727553/AnsiballZ_file.py'
Jan 05 14:25:12 compute-0 sudo[85955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:12 compute-0 python3.9[85957]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:25:12 compute-0 sudo[85955]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:13 compute-0 sudo[86107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbjpyqvfknqpxbxporuuewibggrbygmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623112.8863742-489-17340887038108/AnsiballZ_stat.py'
Jan 05 14:25:13 compute-0 sudo[86107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:13 compute-0 python3.9[86109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:13 compute-0 sudo[86107]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:13 compute-0 sudo[86230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spoddvfcvdqqvhradbpzzonybwhokkwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623112.8863742-489-17340887038108/AnsiballZ_copy.py'
Jan 05 14:25:13 compute-0 sudo[86230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:14 compute-0 python3.9[86232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623112.8863742-489-17340887038108/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fb35b0cceb6bbb3806e5a7af9cadd640cd52197d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:14 compute-0 sudo[86230]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:14 compute-0 sudo[86382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poeyrncvzfgtvejstqchdxvlglzgmaiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623114.426226-505-154941710553138/AnsiballZ_file.py'
Jan 05 14:25:14 compute-0 sudo[86382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:14 compute-0 python3.9[86384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:25:14 compute-0 sudo[86382]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:15 compute-0 sudo[86534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxmgyzosbypjdsikwlxbhshvfqralrby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623115.1792357-513-14537223784286/AnsiballZ_stat.py'
Jan 05 14:25:15 compute-0 sudo[86534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:15 compute-0 python3.9[86536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:15 compute-0 sudo[86534]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:16 compute-0 sudo[86657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnweuyfmkfrdrhfvfowolhfykhvydhnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623115.1792357-513-14537223784286/AnsiballZ_copy.py'
Jan 05 14:25:16 compute-0 sudo[86657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:16 compute-0 python3.9[86659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623115.1792357-513-14537223784286/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fb35b0cceb6bbb3806e5a7af9cadd640cd52197d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:16 compute-0 sudo[86657]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:17 compute-0 sudo[86809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukapeztfshvtnnnghzetgrdtcfngkkfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623116.7548447-529-115738997555857/AnsiballZ_file.py'
Jan 05 14:25:17 compute-0 sudo[86809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:17 compute-0 python3.9[86811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:25:17 compute-0 sudo[86809]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:17 compute-0 sudo[86961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytknejnjxofzqvjrdpvshjsjjwkohlif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623117.5615754-537-153347478127089/AnsiballZ_stat.py'
Jan 05 14:25:17 compute-0 sudo[86961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:18 compute-0 python3.9[86963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:18 compute-0 sudo[86961]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:18 compute-0 sudo[87084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxdpeehpzrrjijncjouiivjdidfmcpno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623117.5615754-537-153347478127089/AnsiballZ_copy.py'
Jan 05 14:25:18 compute-0 sudo[87084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:18 compute-0 python3.9[87086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623117.5615754-537-153347478127089/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fb35b0cceb6bbb3806e5a7af9cadd640cd52197d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:18 compute-0 sudo[87084]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:19 compute-0 sshd-session[77869]: Connection closed by 192.168.122.30 port 39212
Jan 05 14:25:19 compute-0 sshd-session[77866]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:25:19 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 05 14:25:19 compute-0 systemd[1]: session-19.scope: Consumed 41.407s CPU time.
Jan 05 14:25:19 compute-0 systemd-logind[795]: Session 19 logged out. Waiting for processes to exit.
Jan 05 14:25:19 compute-0 systemd-logind[795]: Removed session 19.
Jan 05 14:25:24 compute-0 sshd-session[87111]: Accepted publickey for zuul from 192.168.122.30 port 60520 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:25:25 compute-0 systemd-logind[795]: New session 20 of user zuul.
Jan 05 14:25:25 compute-0 systemd[1]: Started Session 20 of User zuul.
Jan 05 14:25:25 compute-0 sshd-session[87111]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:25:26 compute-0 python3.9[87264]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:25:27 compute-0 sudo[87418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viaenwrbdkmqvguukhsyhfnllofvmpga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623126.7323916-34-198483645044870/AnsiballZ_file.py'
Jan 05 14:25:27 compute-0 sudo[87418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:27 compute-0 python3.9[87420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:25:27 compute-0 sudo[87418]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:28 compute-0 sudo[87570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cihyscxmdmlnmagxfitijkofikgefrtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623127.7068942-34-162650805204168/AnsiballZ_file.py'
Jan 05 14:25:28 compute-0 sudo[87570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:28 compute-0 python3.9[87572]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:25:28 compute-0 sudo[87570]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:29 compute-0 python3.9[87722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:25:30 compute-0 sudo[87872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syxnbsvpjjxwubtstaexigdqexoapikk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623129.5063174-57-64405286040766/AnsiballZ_seboolean.py'
Jan 05 14:25:30 compute-0 sudo[87872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:30 compute-0 python3.9[87874]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 05 14:25:31 compute-0 sudo[87872]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:32 compute-0 sudo[88028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wohwvsncxpxzqipcpyflqfbvrtvffqqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623131.7230217-67-49344973941722/AnsiballZ_setup.py'
Jan 05 14:25:32 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 05 14:25:32 compute-0 sudo[88028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:32 compute-0 python3.9[88030]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:25:32 compute-0 sudo[88028]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:33 compute-0 sudo[88112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlkwmcjicnzruclakpxyiinoqkncxubh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623131.7230217-67-49344973941722/AnsiballZ_dnf.py'
Jan 05 14:25:33 compute-0 sudo[88112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:33 compute-0 python3.9[88114]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:25:34 compute-0 sudo[88112]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:35 compute-0 sudo[88265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtpmsbyzhjnpwsqnzscezosodwlqhcly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623134.9821274-79-266698281416349/AnsiballZ_systemd.py'
Jan 05 14:25:35 compute-0 sudo[88265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:35 compute-0 python3.9[88267]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 14:25:36 compute-0 sudo[88265]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:36 compute-0 sudo[88420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxxbwiiayotwgnvxpobyfgjifcincubm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623136.339557-87-220610559311781/AnsiballZ_edpm_nftables_snippet.py'
Jan 05 14:25:36 compute-0 sudo[88420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:37 compute-0 python3[88422]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 05 14:25:37 compute-0 sudo[88420]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:37 compute-0 sudo[88572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlgkiakjiilpeljybaddwgstnilmzpoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623137.4653082-96-15598970037635/AnsiballZ_file.py'
Jan 05 14:25:37 compute-0 sudo[88572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:38 compute-0 python3.9[88574]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:38 compute-0 sudo[88572]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:38 compute-0 sudo[88724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvbutgbhergoittckpuqqusniujqjaho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623138.2534251-104-176858371432936/AnsiballZ_stat.py'
Jan 05 14:25:38 compute-0 sudo[88724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:38 compute-0 python3.9[88726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:39 compute-0 sudo[88724]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:39 compute-0 sudo[88802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwvhvracuplhudumdjftpfzhooeglfsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623138.2534251-104-176858371432936/AnsiballZ_file.py'
Jan 05 14:25:39 compute-0 sudo[88802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:39 compute-0 python3.9[88804]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:39 compute-0 sudo[88802]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:40 compute-0 sudo[88954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngorxyzobuvhcokrsghgpxulxspsvkol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623139.7854779-116-83480855283796/AnsiballZ_stat.py'
Jan 05 14:25:40 compute-0 sudo[88954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:40 compute-0 python3.9[88956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:40 compute-0 sudo[88954]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:40 compute-0 sudo[89032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqkdzxxigedyjzliuhzfxlqwpwugrewy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623139.7854779-116-83480855283796/AnsiballZ_file.py'
Jan 05 14:25:40 compute-0 sudo[89032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:40 compute-0 python3.9[89034]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.01dsk_em recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:40 compute-0 sudo[89032]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:41 compute-0 sudo[89184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieqawlswndqkktfyrmeapsijpnjlcssa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623141.124686-128-182754220564887/AnsiballZ_stat.py'
Jan 05 14:25:41 compute-0 sudo[89184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:41 compute-0 python3.9[89186]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:41 compute-0 sudo[89184]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:42 compute-0 sudo[89262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cibxzwjoutddnkxokmffqkfeynezzvnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623141.124686-128-182754220564887/AnsiballZ_file.py'
Jan 05 14:25:42 compute-0 sudo[89262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:42 compute-0 python3.9[89264]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:42 compute-0 sudo[89262]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:43 compute-0 sshd-session[89265]: Received disconnect from 193.46.255.7 port 51744:11:  [preauth]
Jan 05 14:25:43 compute-0 sshd-session[89265]: Disconnected from authenticating user root 193.46.255.7 port 51744 [preauth]
Jan 05 14:25:43 compute-0 sudo[89416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xisqukpheyzzobhblrqxkddtnyytdimz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623142.6166925-141-202562050329753/AnsiballZ_command.py'
Jan 05 14:25:43 compute-0 sudo[89416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:43 compute-0 python3.9[89418]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:25:43 compute-0 sudo[89416]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:44 compute-0 sudo[89569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lneagfpmgwzijojbuxbdvnmfzuepjugg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623143.6679225-149-274323497609411/AnsiballZ_edpm_nftables_from_files.py'
Jan 05 14:25:44 compute-0 sudo[89569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:44 compute-0 python3[89571]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 05 14:25:44 compute-0 sudo[89569]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:45 compute-0 sudo[89721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxmtcajboqfwrupbnjnqcyvaivocypel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623144.6235929-157-105284753299724/AnsiballZ_stat.py'
Jan 05 14:25:45 compute-0 sudo[89721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:45 compute-0 python3.9[89723]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:45 compute-0 sudo[89721]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:45 compute-0 sudo[89846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cslmoyoshlytpqrcrvhaipxqufvuihpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623144.6235929-157-105284753299724/AnsiballZ_copy.py'
Jan 05 14:25:45 compute-0 sudo[89846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:46 compute-0 python3.9[89848]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623144.6235929-157-105284753299724/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:46 compute-0 sudo[89846]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:46 compute-0 sudo[89998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eliczyvytgkvaizwocvtyeoadfanpuav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623146.2664177-172-205581859079853/AnsiballZ_stat.py'
Jan 05 14:25:46 compute-0 sudo[89998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:46 compute-0 python3.9[90000]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:46 compute-0 sudo[89998]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:47 compute-0 sudo[90123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbolunmjmgkrctmudffwrvavwndhmnxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623146.2664177-172-205581859079853/AnsiballZ_copy.py'
Jan 05 14:25:47 compute-0 sudo[90123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:47 compute-0 python3.9[90125]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623146.2664177-172-205581859079853/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:47 compute-0 sudo[90123]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:48 compute-0 sudo[90275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsytgetwrhlkvygstpmmzaukwdnbgkmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623147.6723802-187-225165457693321/AnsiballZ_stat.py'
Jan 05 14:25:48 compute-0 sudo[90275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:48 compute-0 python3.9[90277]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:48 compute-0 sudo[90275]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:48 compute-0 sudo[90402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmdincffewyfeocbhiqlphixhihqlkfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623147.6723802-187-225165457693321/AnsiballZ_copy.py'
Jan 05 14:25:48 compute-0 sudo[90402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:48 compute-0 sshd-session[90278]: Received disconnect from 193.46.255.217 port 49272:11:  [preauth]
Jan 05 14:25:48 compute-0 sshd-session[90278]: Disconnected from authenticating user root 193.46.255.217 port 49272 [preauth]
Jan 05 14:25:48 compute-0 python3.9[90404]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623147.6723802-187-225165457693321/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:48 compute-0 sudo[90402]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:49 compute-0 sudo[90554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcwlkkgoyiscuahqzpurcgxbqvtoevn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623149.1316469-202-32639715202695/AnsiballZ_stat.py'
Jan 05 14:25:49 compute-0 sudo[90554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:49 compute-0 python3.9[90556]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:49 compute-0 sudo[90554]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:50 compute-0 sudo[90679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jovkqdqtvbyxvjpzjabnfhguvdxiklie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623149.1316469-202-32639715202695/AnsiballZ_copy.py'
Jan 05 14:25:50 compute-0 sudo[90679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:50 compute-0 python3.9[90681]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623149.1316469-202-32639715202695/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:50 compute-0 sudo[90679]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:51 compute-0 sudo[90831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emujjwhaawmrwoblycbgrtqvdmrftjzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623150.6838434-217-91195515901481/AnsiballZ_stat.py'
Jan 05 14:25:51 compute-0 sudo[90831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:51 compute-0 python3.9[90833]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:25:51 compute-0 chronyd[65581]: Selected source 23.133.168.247 (pool.ntp.org)
Jan 05 14:25:51 compute-0 sudo[90831]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:51 compute-0 sudo[90956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbxrynpdatshdntqsrupuqtigdmaaohk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623150.6838434-217-91195515901481/AnsiballZ_copy.py'
Jan 05 14:25:51 compute-0 sudo[90956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:52 compute-0 python3.9[90958]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623150.6838434-217-91195515901481/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:52 compute-0 sudo[90956]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:52 compute-0 sudo[91109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvswsxkqbjwtnpmwrcpautidjtsjzwcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623152.3070543-232-250959555671181/AnsiballZ_file.py'
Jan 05 14:25:52 compute-0 sudo[91109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:52 compute-0 python3.9[91111]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:52 compute-0 sudo[91109]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:53 compute-0 sudo[91261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaziuxbhieqewafqczynobhynsykqjzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623153.1927645-240-272557731169655/AnsiballZ_command.py'
Jan 05 14:25:53 compute-0 sudo[91261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:53 compute-0 python3.9[91263]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:25:53 compute-0 sudo[91261]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:54 compute-0 sudo[91416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfmzrvdrswretewigtzckfzcadekrdye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623154.0831482-248-93699082359705/AnsiballZ_blockinfile.py'
Jan 05 14:25:54 compute-0 sudo[91416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:54 compute-0 python3.9[91418]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:54 compute-0 sudo[91416]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:55 compute-0 sudo[91568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zroclqugimojhtdwilazagrqyttebsal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623155.0715284-257-189697567914021/AnsiballZ_command.py'
Jan 05 14:25:55 compute-0 sudo[91568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:55 compute-0 python3.9[91570]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:25:55 compute-0 sudo[91568]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:56 compute-0 sudo[91721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfxuzjpdxzfsuismiiwudpdvhvrgakjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623155.8532705-265-184036000955799/AnsiballZ_stat.py'
Jan 05 14:25:56 compute-0 sudo[91721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:56 compute-0 python3.9[91723]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:25:56 compute-0 sudo[91721]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:57 compute-0 sudo[91875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvatyjgohgypwusjiaezxzwguwtxjyag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623156.6717014-273-169580512067094/AnsiballZ_command.py'
Jan 05 14:25:57 compute-0 sudo[91875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:57 compute-0 python3.9[91877]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:25:57 compute-0 sudo[91875]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:57 compute-0 sudo[92030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xttoiefxjkwhlwzytvgpiaatfnkmakpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623157.5184684-281-138713853396676/AnsiballZ_file.py'
Jan 05 14:25:57 compute-0 sudo[92030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:25:58 compute-0 python3.9[92032]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:25:58 compute-0 sudo[92030]: pam_unix(sudo:session): session closed for user root
Jan 05 14:25:59 compute-0 python3.9[92182]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:26:00 compute-0 sudo[92333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlifeqfsbuhsplluzncedztrkraxdfys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623160.0462248-321-83831650221313/AnsiballZ_command.py'
Jan 05 14:26:00 compute-0 sudo[92333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:00 compute-0 python3.9[92335]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:86:5c:f9:a2" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:26:00 compute-0 ovs-vsctl[92336]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:86:5c:f9:a2 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 05 14:26:00 compute-0 sudo[92333]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:01 compute-0 sudo[92486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpvrlynldcyacjeryxdudsmbqwoyfjdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623160.8604436-330-9928667211078/AnsiballZ_command.py'
Jan 05 14:26:01 compute-0 sudo[92486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:01 compute-0 python3.9[92488]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:26:01 compute-0 sudo[92486]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:02 compute-0 sudo[92641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvahsbmsmevnojxypykxpgbjfbkhyoyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623161.7186396-338-91980715778309/AnsiballZ_command.py'
Jan 05 14:26:02 compute-0 sudo[92641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:02 compute-0 python3.9[92643]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:26:02 compute-0 ovs-vsctl[92644]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 05 14:26:02 compute-0 sudo[92641]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:03 compute-0 python3.9[92794]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:26:03 compute-0 sudo[92946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puypeovffhmdgtjztirnnbypyotveajq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623163.3242903-355-89599704271327/AnsiballZ_file.py'
Jan 05 14:26:03 compute-0 sudo[92946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:03 compute-0 python3.9[92948]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:03 compute-0 sudo[92946]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:04 compute-0 sudo[93098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgbbqvvbtkfdaumubltjsnbphtbnsrre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623164.0768664-363-93383436393605/AnsiballZ_stat.py'
Jan 05 14:26:04 compute-0 sudo[93098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:04 compute-0 python3.9[93100]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:04 compute-0 sudo[93098]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:04 compute-0 sudo[93176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzofbgctfiktxkljmamcbxntupkmfotk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623164.0768664-363-93383436393605/AnsiballZ_file.py'
Jan 05 14:26:04 compute-0 sudo[93176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:05 compute-0 python3.9[93178]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:05 compute-0 sudo[93176]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:05 compute-0 sudo[93328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkaimznzblfgzfqlhwuppqmyyrmzchon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623165.3617089-363-11760541363483/AnsiballZ_stat.py'
Jan 05 14:26:05 compute-0 sudo[93328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:05 compute-0 python3.9[93330]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:07 compute-0 sudo[93328]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:07 compute-0 sudo[93406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsekareoubzcflfwmuismgorzbehulua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623165.3617089-363-11760541363483/AnsiballZ_file.py'
Jan 05 14:26:07 compute-0 sudo[93406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:07 compute-0 python3.9[93408]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:07 compute-0 sudo[93406]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:08 compute-0 sudo[93558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czdqyykaqlakpahiyhpnwjxspruzgwej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623167.8218608-386-13072345023563/AnsiballZ_file.py'
Jan 05 14:26:08 compute-0 sudo[93558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:08 compute-0 python3.9[93560]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:08 compute-0 sudo[93558]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:09 compute-0 sudo[93710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvdttscqxfcjcmpxqbeultczifoqmgmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623168.6702664-394-279220266003806/AnsiballZ_stat.py'
Jan 05 14:26:09 compute-0 sudo[93710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:09 compute-0 python3.9[93712]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:09 compute-0 sudo[93710]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:09 compute-0 sudo[93788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyrmutfdmfbpivkgbczewgzxenzwfmpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623168.6702664-394-279220266003806/AnsiballZ_file.py'
Jan 05 14:26:09 compute-0 sudo[93788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:09 compute-0 python3.9[93790]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:09 compute-0 sudo[93788]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:10 compute-0 sudo[93940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdwrhbrmwpvzbsrhgyzgsvglhosfyohr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623170.0566363-406-89043243079915/AnsiballZ_stat.py'
Jan 05 14:26:10 compute-0 sudo[93940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:10 compute-0 python3.9[93942]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:10 compute-0 sudo[93940]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:10 compute-0 sudo[94018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djbnhglkbddyattwqffqeyogofuqwkoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623170.0566363-406-89043243079915/AnsiballZ_file.py'
Jan 05 14:26:10 compute-0 sudo[94018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:11 compute-0 python3.9[94020]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:11 compute-0 sudo[94018]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:11 compute-0 sudo[94170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfxgfljqpmibykwzvfoefhahqybymmov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623171.4549491-418-168282756328723/AnsiballZ_systemd.py'
Jan 05 14:26:11 compute-0 sudo[94170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:12 compute-0 python3.9[94172]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:26:12 compute-0 systemd[1]: Reloading.
Jan 05 14:26:12 compute-0 systemd-rc-local-generator[94203]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:26:12 compute-0 systemd-sysv-generator[94207]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:26:12 compute-0 sudo[94170]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:13 compute-0 sudo[94360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkyjpzecpiyapxmuftcdadoydnqkcndq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623172.6182199-426-144650361856943/AnsiballZ_stat.py'
Jan 05 14:26:13 compute-0 sudo[94360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:13 compute-0 python3.9[94362]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:13 compute-0 sudo[94360]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:13 compute-0 sudo[94438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bamiaxcxyflequklsmvkabgowaetgxzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623172.6182199-426-144650361856943/AnsiballZ_file.py'
Jan 05 14:26:13 compute-0 sudo[94438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:13 compute-0 python3.9[94440]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:13 compute-0 sudo[94438]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:14 compute-0 sudo[94590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocinbidwpilgwwnvuiuuzdeehbxmkjlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623173.9858358-438-78002736024758/AnsiballZ_stat.py'
Jan 05 14:26:14 compute-0 sudo[94590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:14 compute-0 python3.9[94592]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:14 compute-0 sudo[94590]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:14 compute-0 sudo[94668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsnxaxehakhvpsytolbixduprvddoezp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623173.9858358-438-78002736024758/AnsiballZ_file.py'
Jan 05 14:26:14 compute-0 sudo[94668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:15 compute-0 python3.9[94670]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:15 compute-0 sudo[94668]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:15 compute-0 sudo[94820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnunmvkyarejejmkkwyujgxxitkmkzod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623175.327636-450-49683033614854/AnsiballZ_systemd.py'
Jan 05 14:26:15 compute-0 sudo[94820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:15 compute-0 python3.9[94822]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:26:16 compute-0 systemd[1]: Reloading.
Jan 05 14:26:16 compute-0 systemd-rc-local-generator[94847]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:26:16 compute-0 systemd-sysv-generator[94851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:26:16 compute-0 systemd[1]: Starting Create netns directory...
Jan 05 14:26:16 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 05 14:26:16 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 05 14:26:16 compute-0 systemd[1]: Finished Create netns directory.
Jan 05 14:26:16 compute-0 sudo[94820]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:16 compute-0 sudo[95013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkjozaicjehswksxvvecfuxaczqkvjht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623176.6891565-460-51698423119625/AnsiballZ_file.py'
Jan 05 14:26:16 compute-0 sudo[95013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:17 compute-0 python3.9[95015]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:17 compute-0 sudo[95013]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:17 compute-0 sudo[95165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuamieyvdlosfayicvtwpdtaalrpggmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623177.4052293-468-236990323239354/AnsiballZ_stat.py'
Jan 05 14:26:17 compute-0 sudo[95165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:17 compute-0 python3.9[95167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:17 compute-0 sudo[95165]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:18 compute-0 sudo[95288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izecbmidicrrvmifdyhmkrucrrqksdkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623177.4052293-468-236990323239354/AnsiballZ_copy.py'
Jan 05 14:26:18 compute-0 sudo[95288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:18 compute-0 python3.9[95290]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623177.4052293-468-236990323239354/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:18 compute-0 sudo[95288]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:19 compute-0 sudo[95440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjsmysmkkovdildvfyhenjsamdphpkbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623179.192704-485-259399760811195/AnsiballZ_file.py'
Jan 05 14:26:19 compute-0 sudo[95440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:19 compute-0 python3.9[95442]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:19 compute-0 sudo[95440]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:20 compute-0 sudo[95592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxgikcogjydmrbxkrkkpupaoxuzclvzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623180.001228-493-133453202819985/AnsiballZ_file.py'
Jan 05 14:26:20 compute-0 sudo[95592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:20 compute-0 python3.9[95594]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:20 compute-0 sudo[95592]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:21 compute-0 sudo[95744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yritmzbyovusyuziydterksjicueqmlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623180.8382146-501-248949111289347/AnsiballZ_stat.py'
Jan 05 14:26:21 compute-0 sudo[95744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:21 compute-0 python3.9[95746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:21 compute-0 sudo[95744]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:21 compute-0 sudo[95867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvvpydbnnzljijlwxbnjowvqepqpmxsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623180.8382146-501-248949111289347/AnsiballZ_copy.py'
Jan 05 14:26:21 compute-0 sudo[95867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:22 compute-0 python3.9[95869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623180.8382146-501-248949111289347/.source.json _original_basename=.wc9ue7kh follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:22 compute-0 sudo[95867]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:22 compute-0 python3.9[96019]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:25 compute-0 sudo[96440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccyiwngegcrgqdweoldncicupunleoqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623184.971581-541-43531010552621/AnsiballZ_container_config_data.py'
Jan 05 14:26:25 compute-0 sudo[96440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:25 compute-0 python3.9[96442]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 05 14:26:25 compute-0 sudo[96440]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:26 compute-0 sudo[96592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siaadvbloqjheoniwnamzhovcdnmclvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623186.1345365-552-217910449997574/AnsiballZ_container_config_hash.py'
Jan 05 14:26:26 compute-0 sudo[96592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:26 compute-0 python3.9[96594]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 14:26:26 compute-0 sudo[96592]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:27 compute-0 sudo[96744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nllqpqzhrosccpoyfkjzltrnloyetxgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623187.2099774-561-207766709514480/AnsiballZ_podman_container_info.py'
Jan 05 14:26:27 compute-0 sudo[96744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:27 compute-0 python3.9[96746]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 14:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:26:28 compute-0 sudo[96744]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:29 compute-0 sudo[96908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enddmxnqcldjywbmlcojugxulvdanfkn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623188.6921248-574-141951991272989/AnsiballZ_edpm_container_manage.py'
Jan 05 14:26:29 compute-0 sudo[96908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:29 compute-0 python3[96910]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 14:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:26:29 compute-0 podman[96947]: 2026-01-05 14:26:29.861828007 +0000 UTC m=+0.080914433 container create eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 05 14:26:29 compute-0 podman[96947]: 2026-01-05 14:26:29.805845204 +0000 UTC m=+0.024931680 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 05 14:26:29 compute-0 python3[96910]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 05 14:26:29 compute-0 sudo[96908]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:30 compute-0 sudo[97136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mecpmkgsafmcmfkclgnnlttigwobpzoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623190.2409456-582-147299813308598/AnsiballZ_stat.py'
Jan 05 14:26:30 compute-0 sudo[97136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 14:26:30 compute-0 python3.9[97138]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:26:30 compute-0 sudo[97136]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:31 compute-0 sudo[97290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akmljsqbzzfjnqhlvmeouxryxyomjxcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623191.237326-591-230721326475481/AnsiballZ_file.py'
Jan 05 14:26:31 compute-0 sudo[97290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:31 compute-0 python3.9[97292]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:31 compute-0 sudo[97290]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:32 compute-0 sudo[97366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmmnxbaxuptvgabcuyqogofezzilokig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623191.237326-591-230721326475481/AnsiballZ_stat.py'
Jan 05 14:26:32 compute-0 sudo[97366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:32 compute-0 python3.9[97368]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:26:32 compute-0 sudo[97366]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:33 compute-0 sudo[97517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xshqhbvotaohqgbpnzxqqqavwlspedcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623192.5257843-591-238719148408641/AnsiballZ_copy.py'
Jan 05 14:26:33 compute-0 sudo[97517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:33 compute-0 python3.9[97519]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767623192.5257843-591-238719148408641/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:33 compute-0 sudo[97517]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:33 compute-0 sudo[97593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcqdkahgvacabnlmvjiyxnvblghqeyot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623192.5257843-591-238719148408641/AnsiballZ_systemd.py'
Jan 05 14:26:33 compute-0 sudo[97593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:33 compute-0 python3.9[97595]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:26:34 compute-0 systemd[1]: Reloading.
Jan 05 14:26:34 compute-0 systemd-rc-local-generator[97617]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:26:34 compute-0 systemd-sysv-generator[97622]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:26:34 compute-0 sudo[97593]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:34 compute-0 sudo[97704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kstotvoazwdyberzqedevyqtcikselvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623192.5257843-591-238719148408641/AnsiballZ_systemd.py'
Jan 05 14:26:34 compute-0 sudo[97704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:34 compute-0 python3.9[97706]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:26:34 compute-0 systemd[1]: Reloading.
Jan 05 14:26:35 compute-0 systemd-rc-local-generator[97736]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:26:35 compute-0 systemd-sysv-generator[97739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:26:35 compute-0 systemd[1]: Starting ovn_controller container...
Jan 05 14:26:35 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 05 14:26:35 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b00de88c61a015024c07ba8fafd4690de05f708f3426054e31501147873cb231/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 05 14:26:35 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c.
Jan 05 14:26:35 compute-0 podman[97747]: 2026-01-05 14:26:35.418148493 +0000 UTC m=+0.177621205 container init eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 05 14:26:35 compute-0 ovn_controller[97763]: + sudo -E kolla_set_configs
Jan 05 14:26:35 compute-0 podman[97747]: 2026-01-05 14:26:35.456938438 +0000 UTC m=+0.216411110 container start eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 14:26:35 compute-0 edpm-start-podman-container[97747]: ovn_controller
Jan 05 14:26:35 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 05 14:26:35 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 05 14:26:35 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 05 14:26:35 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 05 14:26:35 compute-0 edpm-start-podman-container[97746]: Creating additional drop-in dependency for "ovn_controller" (eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c)
Jan 05 14:26:35 compute-0 systemd[97803]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 05 14:26:35 compute-0 systemd[1]: Reloading.
Jan 05 14:26:35 compute-0 podman[97770]: 2026-01-05 14:26:35.591122361 +0000 UTC m=+0.114790556 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 05 14:26:35 compute-0 systemd-sysv-generator[97856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:26:35 compute-0 systemd-rc-local-generator[97851]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:26:35 compute-0 systemd[97803]: Queued start job for default target Main User Target.
Jan 05 14:26:35 compute-0 systemd[97803]: Created slice User Application Slice.
Jan 05 14:26:35 compute-0 systemd[97803]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 05 14:26:35 compute-0 systemd[97803]: Started Daily Cleanup of User's Temporary Directories.
Jan 05 14:26:35 compute-0 systemd[97803]: Reached target Paths.
Jan 05 14:26:35 compute-0 systemd[97803]: Reached target Timers.
Jan 05 14:26:35 compute-0 systemd[97803]: Starting D-Bus User Message Bus Socket...
Jan 05 14:26:35 compute-0 systemd[97803]: Starting Create User's Volatile Files and Directories...
Jan 05 14:26:35 compute-0 systemd[97803]: Finished Create User's Volatile Files and Directories.
Jan 05 14:26:35 compute-0 systemd[97803]: Listening on D-Bus User Message Bus Socket.
Jan 05 14:26:35 compute-0 systemd[97803]: Reached target Sockets.
Jan 05 14:26:35 compute-0 systemd[97803]: Reached target Basic System.
Jan 05 14:26:35 compute-0 systemd[97803]: Reached target Main User Target.
Jan 05 14:26:35 compute-0 systemd[97803]: Startup finished in 156ms.
Jan 05 14:26:35 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 05 14:26:35 compute-0 systemd[1]: Started ovn_controller container.
Jan 05 14:26:35 compute-0 systemd[1]: eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c-4c0e178f2fa0135c.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 14:26:35 compute-0 systemd[1]: eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c-4c0e178f2fa0135c.service: Failed with result 'exit-code'.
Jan 05 14:26:35 compute-0 systemd[1]: Started Session c1 of User root.
Jan 05 14:26:35 compute-0 sudo[97704]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:35 compute-0 ovn_controller[97763]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 14:26:35 compute-0 ovn_controller[97763]: INFO:__main__:Validating config file
Jan 05 14:26:35 compute-0 ovn_controller[97763]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 14:26:35 compute-0 ovn_controller[97763]: INFO:__main__:Writing out command to execute
Jan 05 14:26:35 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 05 14:26:35 compute-0 ovn_controller[97763]: ++ cat /run_command
Jan 05 14:26:35 compute-0 ovn_controller[97763]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 05 14:26:35 compute-0 ovn_controller[97763]: + ARGS=
Jan 05 14:26:35 compute-0 ovn_controller[97763]: + sudo kolla_copy_cacerts
Jan 05 14:26:35 compute-0 systemd[1]: Started Session c2 of User root.
Jan 05 14:26:36 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 05 14:26:36 compute-0 ovn_controller[97763]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 05 14:26:36 compute-0 ovn_controller[97763]: + [[ ! -n '' ]]
Jan 05 14:26:36 compute-0 ovn_controller[97763]: + . kolla_extend_start
Jan 05 14:26:36 compute-0 ovn_controller[97763]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 05 14:26:36 compute-0 ovn_controller[97763]: + umask 0022
Jan 05 14:26:36 compute-0 ovn_controller[97763]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 05 14:26:36 compute-0 NetworkManager[56139]: <info>  [1767623196.0907] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 05 14:26:36 compute-0 NetworkManager[56139]: <info>  [1767623196.0919] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:26:36 compute-0 NetworkManager[56139]: <warn>  [1767623196.0923] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 14:26:36 compute-0 NetworkManager[56139]: <info>  [1767623196.0934] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Jan 05 14:26:36 compute-0 NetworkManager[56139]: <info>  [1767623196.0943] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Jan 05 14:26:36 compute-0 NetworkManager[56139]: <info>  [1767623196.0947] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 05 14:26:36 compute-0 kernel: br-int: entered promiscuous mode
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 05 14:26:36 compute-0 systemd-udevd[97913]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 05 14:26:36 compute-0 ovn_controller[97763]: 2026-01-05T14:26:36Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 05 14:26:36 compute-0 NetworkManager[56139]: <info>  [1767623196.2539] manager: (ovn-e3ef8d-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 05 14:26:36 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 05 14:26:36 compute-0 NetworkManager[56139]: <info>  [1767623196.3054] device (genev_sys_6081): carrier: link connected
Jan 05 14:26:36 compute-0 NetworkManager[56139]: <info>  [1767623196.3060] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Jan 05 14:26:36 compute-0 python3.9[98030]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 14:26:37 compute-0 sudo[98180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjqvskqwytzzfjjilpspcwpkfbfabyxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623197.3326366-632-221218250614876/AnsiballZ_stat.py'
Jan 05 14:26:37 compute-0 sudo[98180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:37 compute-0 python3.9[98182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:37 compute-0 sudo[98180]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:38 compute-0 sudo[98303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvsznuldlooxhtrxfzsqisvgyfnyujqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623197.3326366-632-221218250614876/AnsiballZ_copy.py'
Jan 05 14:26:38 compute-0 sudo[98303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:38 compute-0 python3.9[98305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623197.3326366-632-221218250614876/.source.yaml _original_basename=.xnb9y430 follow=False checksum=a089bef9cf3a855b098d7ae45e63e2c82319a4a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:26:38 compute-0 sudo[98303]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:39 compute-0 sudo[98455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gumlcancntqwrgkuzlnbcqwxbhaobkuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623198.9025135-647-105197238459128/AnsiballZ_command.py'
Jan 05 14:26:39 compute-0 sudo[98455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:39 compute-0 python3.9[98457]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:26:39 compute-0 ovs-vsctl[98458]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 05 14:26:39 compute-0 sudo[98455]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:40 compute-0 sudo[98608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drfvcyfxyejixzjggshybovaamhhnxfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623199.736717-655-171165828546380/AnsiballZ_command.py'
Jan 05 14:26:40 compute-0 sudo[98608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:40 compute-0 python3.9[98610]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:26:40 compute-0 ovs-vsctl[98612]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 05 14:26:40 compute-0 sudo[98608]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:41 compute-0 sudo[98763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brrowsdwiojytxvqjehhkrbbqtkcedjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623200.8016229-669-198695532397520/AnsiballZ_command.py'
Jan 05 14:26:41 compute-0 sudo[98763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:41 compute-0 python3.9[98765]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:26:41 compute-0 ovs-vsctl[98766]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 05 14:26:41 compute-0 sudo[98763]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:41 compute-0 sshd-session[87114]: Connection closed by 192.168.122.30 port 60520
Jan 05 14:26:41 compute-0 sshd-session[87111]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:26:41 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Jan 05 14:26:41 compute-0 systemd[1]: session-20.scope: Consumed 57.548s CPU time.
Jan 05 14:26:41 compute-0 systemd-logind[795]: Session 20 logged out. Waiting for processes to exit.
Jan 05 14:26:41 compute-0 systemd-logind[795]: Removed session 20.
Jan 05 14:26:46 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 05 14:26:46 compute-0 systemd[97803]: Activating special unit Exit the Session...
Jan 05 14:26:46 compute-0 systemd[97803]: Stopped target Main User Target.
Jan 05 14:26:46 compute-0 systemd[97803]: Stopped target Basic System.
Jan 05 14:26:46 compute-0 systemd[97803]: Stopped target Paths.
Jan 05 14:26:46 compute-0 systemd[97803]: Stopped target Sockets.
Jan 05 14:26:46 compute-0 systemd[97803]: Stopped target Timers.
Jan 05 14:26:46 compute-0 systemd[97803]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 05 14:26:46 compute-0 systemd[97803]: Closed D-Bus User Message Bus Socket.
Jan 05 14:26:46 compute-0 systemd[97803]: Stopped Create User's Volatile Files and Directories.
Jan 05 14:26:46 compute-0 systemd[97803]: Removed slice User Application Slice.
Jan 05 14:26:46 compute-0 systemd[97803]: Reached target Shutdown.
Jan 05 14:26:46 compute-0 systemd[97803]: Finished Exit the Session.
Jan 05 14:26:46 compute-0 systemd[97803]: Reached target Exit the Session.
Jan 05 14:26:46 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 05 14:26:46 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 05 14:26:46 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 05 14:26:46 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 05 14:26:46 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 05 14:26:46 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 05 14:26:46 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 05 14:26:47 compute-0 sshd-session[98793]: Accepted publickey for zuul from 192.168.122.30 port 48462 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:26:47 compute-0 systemd-logind[795]: New session 22 of user zuul.
Jan 05 14:26:47 compute-0 systemd[1]: Started Session 22 of User zuul.
Jan 05 14:26:47 compute-0 sshd-session[98793]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:26:48 compute-0 python3.9[98946]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:26:49 compute-0 sudo[99100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txgjcofresidaonzeymligorhcfsnomy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623209.1663716-34-162770753880697/AnsiballZ_file.py'
Jan 05 14:26:49 compute-0 sudo[99100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:49 compute-0 python3.9[99102]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:49 compute-0 sudo[99100]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:50 compute-0 sudo[99252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmqhasisuptqxxpcxnkffatixhcndzzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623210.1042378-34-238810712720791/AnsiballZ_file.py'
Jan 05 14:26:50 compute-0 sudo[99252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:50 compute-0 python3.9[99254]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:50 compute-0 sudo[99252]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:51 compute-0 sudo[99404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwcgwqqowxnuliljrlyjiddowhtrvkoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623210.9323933-34-253929650196748/AnsiballZ_file.py'
Jan 05 14:26:51 compute-0 sudo[99404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:51 compute-0 python3.9[99406]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:51 compute-0 sudo[99404]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:52 compute-0 sudo[99556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txhjabxpmajsrhdmvepwhmpezkzdbnwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623211.7480528-34-149889292693000/AnsiballZ_file.py'
Jan 05 14:26:52 compute-0 sudo[99556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:52 compute-0 python3.9[99558]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:52 compute-0 sudo[99556]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:52 compute-0 sudo[99708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqyetdwhxikdcwbymxnhbghtphiiqpbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623212.582532-34-62044113658903/AnsiballZ_file.py'
Jan 05 14:26:52 compute-0 sudo[99708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:53 compute-0 python3.9[99710]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:53 compute-0 sudo[99708]: pam_unix(sudo:session): session closed for user root
Jan 05 14:26:54 compute-0 python3.9[99860]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:26:54 compute-0 sudo[100011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tahcjutqnpfgkbqgnomiknfqtztnhvcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623214.3800235-78-510804635518/AnsiballZ_seboolean.py'
Jan 05 14:26:54 compute-0 sudo[100011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:26:55 compute-0 python3.9[100013]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 05 14:26:55 compute-0 sudo[100011]: pam_unix(sudo:session): session closed for user root
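The seboolean task above is equivalent to toggling the boolean persistently from a shell:

    setsebool -P virt_sandbox_use_netlink on
    getsebool virt_sandbox_use_netlink    # reports: virt_sandbox_use_netlink --> on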
Jan 05 14:26:56 compute-0 python3.9[100163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:57 compute-0 python3.9[100284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623215.9156644-86-229504350667439/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:58 compute-0 python3.9[100434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:26:58 compute-0 python3.9[100555]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623217.8397753-101-198628251809590/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:26:59 compute-0 sudo[100705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xltsvzakxairwvnffxamxuubsjpyssup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623219.2971528-118-193233096547653/AnsiballZ_setup.py'
Jan 05 14:26:59 compute-0 sudo[100705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:00 compute-0 python3.9[100707]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:27:00 compute-0 sudo[100705]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:00 compute-0 sudo[100789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxstkjldtwcwxaamyoruaiigllolilao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623219.2971528-118-193233096547653/AnsiballZ_dnf.py'
Jan 05 14:27:00 compute-0 sudo[100789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:01 compute-0 python3.9[100791]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:27:02 compute-0 sudo[100789]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:03 compute-0 sudo[100942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmucrxzyekosoagdkfljdugdjkydkzim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623222.6787121-130-35279979789187/AnsiballZ_systemd.py'
Jan 05 14:27:03 compute-0 sudo[100942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:03 compute-0 python3.9[100944]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 14:27:03 compute-0 sudo[100942]: pam_unix(sudo:session): session closed for user root
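The dnf and systemd tasks above amount to installing Open vSwitch and ensuring the service is enabled and running; the shell equivalent is roughly:

    dnf install -y openvswitch
    systemctl enable --now openvswitch.service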
Jan 05 14:27:04 compute-0 python3.9[101097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:05 compute-0 python3.9[101218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623224.1582353-138-14224965266883/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:27:06 compute-0 ovn_controller[97763]: 2026-01-05T14:27:06Z|00025|memory|INFO|16384 kB peak resident set size after 30.0 seconds
Jan 05 14:27:06 compute-0 ovn_controller[97763]: 2026-01-05T14:27:06Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 05 14:27:06 compute-0 podman[101342]: 2026-01-05 14:27:06.140440279 +0000 UTC m=+0.188476344 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 05 14:27:06 compute-0 python3.9[101379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:06 compute-0 python3.9[101515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623225.627429-138-25346854718609/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:27:08 compute-0 python3.9[101665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:08 compute-0 python3.9[101786]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623227.6271093-182-40906149192392/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:27:09 compute-0 python3.9[101936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:10 compute-0 python3.9[102057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623229.0741634-182-75172908803311/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:27:10 compute-0 python3.9[102207]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:27:11 compute-0 sudo[102359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvahwjfgnacmgtvwmvujfymgslyhwkxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623231.293049-220-28454833005577/AnsiballZ_file.py'
Jan 05 14:27:11 compute-0 sudo[102359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:11 compute-0 python3.9[102361]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:27:11 compute-0 sudo[102359]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:12 compute-0 sudo[102511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hslknagxxckuxkzhalflgjuqginchgao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623232.1088662-228-252280748318083/AnsiballZ_stat.py'
Jan 05 14:27:12 compute-0 sudo[102511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:12 compute-0 python3.9[102513]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:12 compute-0 sudo[102511]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:13 compute-0 sudo[102589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywsklfawyuiilbsimsytdbsyulgxovkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623232.1088662-228-252280748318083/AnsiballZ_file.py'
Jan 05 14:27:13 compute-0 sudo[102589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:13 compute-0 python3.9[102591]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:27:13 compute-0 sudo[102589]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:13 compute-0 sudo[102741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmiydllbpvnqwybekqplkkeyiijuptah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623233.4322398-228-67879691434830/AnsiballZ_stat.py'
Jan 05 14:27:13 compute-0 sudo[102741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:14 compute-0 python3.9[102743]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:14 compute-0 sudo[102741]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:14 compute-0 sudo[102819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhimukkhejraczefzwqvrwzdaanhxoql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623233.4322398-228-67879691434830/AnsiballZ_file.py'
Jan 05 14:27:14 compute-0 sudo[102819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:14 compute-0 python3.9[102821]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:27:14 compute-0 sudo[102819]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:15 compute-0 sudo[102971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czdlzrbclnvraksfvszerxtysvrtdxat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623234.8683066-251-210397823761904/AnsiballZ_file.py'
Jan 05 14:27:15 compute-0 sudo[102971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:15 compute-0 python3.9[102973]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:15 compute-0 sudo[102971]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:16 compute-0 sudo[103123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylbyyimslglxgsdhzfwosjgkjmlupqms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623235.7682316-259-63018017596070/AnsiballZ_stat.py'
Jan 05 14:27:16 compute-0 sudo[103123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:16 compute-0 python3.9[103125]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:16 compute-0 sudo[103123]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:16 compute-0 sudo[103201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrwipzzgqhljuvrmxbrvsbyvicmvauru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623235.7682316-259-63018017596070/AnsiballZ_file.py'
Jan 05 14:27:16 compute-0 sudo[103201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:16 compute-0 python3.9[103203]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:16 compute-0 sudo[103201]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:17 compute-0 sudo[103353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndcgrvuzgphujdmvwbbpzbjoqfwzkios ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623237.085054-271-174425881271591/AnsiballZ_stat.py'
Jan 05 14:27:17 compute-0 sudo[103353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:17 compute-0 python3.9[103355]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:17 compute-0 sudo[103353]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:17 compute-0 sudo[103431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwounetorqraymcnvcvzfmyiddpnutlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623237.085054-271-174425881271591/AnsiballZ_file.py'
Jan 05 14:27:17 compute-0 sudo[103431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:18 compute-0 python3.9[103433]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:18 compute-0 sudo[103431]: pam_unix(sudo:session): session closed for user root
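The preset file installed above follows the systemd.preset format; its contents are not captured in the log, but for this pattern it is typically a single enable directive, for example:

    enable edpm-container-shutdown.service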
Jan 05 14:27:18 compute-0 sudo[103583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nefkleepnvksftspdjkapbgchlcptgvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623238.522146-283-161645279626207/AnsiballZ_systemd.py'
Jan 05 14:27:18 compute-0 sudo[103583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:19 compute-0 python3.9[103585]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:27:19 compute-0 systemd[1]: Reloading.
Jan 05 14:27:19 compute-0 systemd-rc-local-generator[103611]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:27:19 compute-0 systemd-sysv-generator[103616]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:27:19 compute-0 sudo[103583]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:20 compute-0 sudo[103772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjhpakakissixygwxpnmugajnyukzkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623239.7118804-291-18068462054957/AnsiballZ_stat.py'
Jan 05 14:27:20 compute-0 sudo[103772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:20 compute-0 python3.9[103774]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:20 compute-0 sudo[103772]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:20 compute-0 sudo[103850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwtbfgffitxuqhdbkakbtmyzeumeobvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623239.7118804-291-18068462054957/AnsiballZ_file.py'
Jan 05 14:27:20 compute-0 sudo[103850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:20 compute-0 python3.9[103852]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:20 compute-0 sudo[103850]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:21 compute-0 sudo[104002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-debtmniatzgqirqwbkzpzlkpngdshjqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623241.1602814-303-248911283507516/AnsiballZ_stat.py'
Jan 05 14:27:21 compute-0 sudo[104002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:21 compute-0 python3.9[104004]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:21 compute-0 sudo[104002]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:22 compute-0 sudo[104080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndngmnzisqwnmnmlyzqzuxzacclsskiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623241.1602814-303-248911283507516/AnsiballZ_file.py'
Jan 05 14:27:22 compute-0 sudo[104080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:22 compute-0 python3.9[104082]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:22 compute-0 sudo[104080]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:22 compute-0 sudo[104232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbhqhksdservjivksmoyvfgkejewbxgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623242.4419866-315-70106246517672/AnsiballZ_systemd.py'
Jan 05 14:27:22 compute-0 sudo[104232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:23 compute-0 python3.9[104234]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:27:23 compute-0 systemd[1]: Reloading.
Jan 05 14:27:23 compute-0 systemd-sysv-generator[104262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:27:23 compute-0 systemd-rc-local-generator[104257]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:27:23 compute-0 systemd[1]: Starting Create netns directory...
Jan 05 14:27:23 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 05 14:27:23 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 05 14:27:23 compute-0 systemd[1]: Finished Create netns directory.
Jan 05 14:27:23 compute-0 sudo[104232]: pam_unix(sudo:session): session closed for user root
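The netns-placeholder unit file is not shown in the log, but the run-netns-placeholder.mount activity above is consistent with a oneshot unit that creates and removes a throwaway namespace purely so /run/netns exists as a shared mount before containers bind it; an illustrative approximation (not the actual unit contents):

    ip netns add placeholder       # creates and mounts /run/netns/placeholder
    ip netns delete placeholder    # namespace removed; /run/netns itself stays mounted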
Jan 05 14:27:24 compute-0 sudo[104426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgxihcsegjujuctwnmljapfcolxadyyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623243.7726145-325-279630441743001/AnsiballZ_file.py'
Jan 05 14:27:24 compute-0 sudo[104426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:24 compute-0 python3.9[104430]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:27:24 compute-0 sudo[104426]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:24 compute-0 sshd-session[104427]: Invalid user solv from 165.22.168.95 port 33380
Jan 05 14:27:24 compute-0 sshd-session[104427]: Connection closed by invalid user solv 165.22.168.95 port 33380 [preauth]
Jan 05 14:27:24 compute-0 sudo[104580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocnbgpsdaqeklaaykqzjwtdeqtkhnjgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623244.554392-333-58894052036023/AnsiballZ_stat.py'
Jan 05 14:27:24 compute-0 sudo[104580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:25 compute-0 python3.9[104582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:25 compute-0 sudo[104580]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:25 compute-0 sudo[104703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emwaalcqrskcrzjxzvulgkxqnzjxackt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623244.554392-333-58894052036023/AnsiballZ_copy.py'
Jan 05 14:27:25 compute-0 sudo[104703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:26 compute-0 python3.9[104705]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623244.554392-333-58894052036023/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:27:26 compute-0 sudo[104703]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:26 compute-0 sudo[104855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-argvxufjsuflmzyjceqnmdfnyzroelzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623246.4036713-350-269259593011429/AnsiballZ_file.py'
Jan 05 14:27:26 compute-0 sudo[104855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:27 compute-0 python3.9[104857]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:27 compute-0 sudo[104855]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:27 compute-0 sudo[105007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcwxbkoaszxzbjovqzwlbctujogbqvsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623247.2752693-358-186278925010591/AnsiballZ_file.py'
Jan 05 14:27:27 compute-0 sudo[105007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:27 compute-0 python3.9[105009]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:27:27 compute-0 sudo[105007]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:28 compute-0 sudo[105159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcnkyufpudoamebgdzbxzgmnhyrkzxjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623248.1861768-366-264758484898126/AnsiballZ_stat.py'
Jan 05 14:27:28 compute-0 sudo[105159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:28 compute-0 python3.9[105161]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:28 compute-0 sudo[105159]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:29 compute-0 sudo[105282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqjgmkoqtdwkmexdvciizuoolpssijpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623248.1861768-366-264758484898126/AnsiballZ_copy.py'
Jan 05 14:27:29 compute-0 sudo[105282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:29 compute-0 python3.9[105284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623248.1861768-366-264758484898126/.source.json _original_basename=.s32802l7 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:29 compute-0 sudo[105282]: pam_unix(sudo:session): session closed for user root
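The JSON written above is a kolla config file consumed by kolla_set_configs inside the container (see the '+ sudo -E kolla_set_configs' line further down). Its exact contents are not logged; the usual shape, with an illustrative command for this agent, is:

    {
      "command": "/usr/bin/neutron-ovn-metadata-agent --config-dir /etc/neutron.conf.d",
      "config_files": [
        {
          "source": "/var/lib/kolla/config_files/src/*",
          "dest": "/",
          "merge": true,
          "preserve_properties": true
        }
      ],
      "permissions": [
        {
          "path": "/var/lib/neutron",
          "owner": "neutron:neutron",
          "recurse": true
        }
      ]
    }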
Jan 05 14:27:30 compute-0 python3.9[105434]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:32 compute-0 sudo[105855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfjjuxdhcbevszsguinqhxjohkimowch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623252.3725836-406-268069850545631/AnsiballZ_container_config_data.py'
Jan 05 14:27:32 compute-0 sudo[105855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:33 compute-0 python3.9[105857]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 05 14:27:33 compute-0 sudo[105855]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:34 compute-0 sudo[106007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stnnnvdaooawfmlgivieutcanzwlbumy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623253.550282-417-279320271002972/AnsiballZ_container_config_hash.py'
Jan 05 14:27:34 compute-0 sudo[106007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:34 compute-0 python3.9[106009]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 14:27:34 compute-0 sudo[106007]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:35 compute-0 sudo[106159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emocrtubevoroodilaejrbzdoqrzvfzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623254.630331-426-52379043494819/AnsiballZ_podman_container_info.py'
Jan 05 14:27:35 compute-0 sudo[106159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:35 compute-0 python3.9[106161]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 14:27:35 compute-0 sudo[106159]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:36 compute-0 podman[106286]: 2026-01-05 14:27:36.687739845 +0000 UTC m=+0.162268813 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 05 14:27:36 compute-0 sudo[106363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgsxghhkatezkfvcetzldjneucfarqxk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623256.0755768-439-57827243439858/AnsiballZ_edpm_container_manage.py'
Jan 05 14:27:36 compute-0 sudo[106363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:37 compute-0 python3[106365]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 14:27:37 compute-0 podman[106402]: 2026-01-05 14:27:37.431984198 +0000 UTC m=+0.077264672 container create c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 05 14:27:37 compute-0 podman[106402]: 2026-01-05 14:27:37.394181136 +0000 UTC m=+0.039461660 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 14:27:37 compute-0 python3[106365]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z 
quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 14:27:37 compute-0 sudo[106363]: pam_unix(sudo:session): session closed for user root
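Note that the edpm_container_manage step above only creates the ovn_metadata_agent container; it is started later by the edpm_ovn_metadata_agent.service unit written a few lines below. Until then it can be seen in the Created state with:

    podman ps -a --filter name=ovn_metadata_agent --format '{{.Names}} {{.Status}}'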
Jan 05 14:27:38 compute-0 sudo[106590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvbsoxufrobouiehfouwbthythlsgqyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623257.8503256-447-248181977624689/AnsiballZ_stat.py'
Jan 05 14:27:38 compute-0 sudo[106590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:38 compute-0 python3.9[106592]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:27:38 compute-0 sudo[106590]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:39 compute-0 sudo[106744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktybnxuocjlfeyjjebxhkggusngnctws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623258.7021334-456-157799161786090/AnsiballZ_file.py'
Jan 05 14:27:39 compute-0 sudo[106744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:39 compute-0 python3.9[106746]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:39 compute-0 sudo[106744]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:39 compute-0 sudo[106820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sliwrfoyzqywvmzjkkhunaevebsmgkwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623258.7021334-456-157799161786090/AnsiballZ_stat.py'
Jan 05 14:27:39 compute-0 sudo[106820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:39 compute-0 python3.9[106822]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:27:39 compute-0 sudo[106820]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:40 compute-0 sudo[106971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxlewttasdlpzfsuxbrrzjjroyajdtuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623259.9700394-456-99034921831030/AnsiballZ_copy.py'
Jan 05 14:27:40 compute-0 sudo[106971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:40 compute-0 python3.9[106973]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767623259.9700394-456-99034921831030/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:40 compute-0 sudo[106971]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:41 compute-0 sudo[107047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixugzjdvwyxxvsjgampnaacosvrgubsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623259.9700394-456-99034921831030/AnsiballZ_systemd.py'
Jan 05 14:27:41 compute-0 sudo[107047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:41 compute-0 python3.9[107049]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:27:41 compute-0 systemd[1]: Reloading.
Jan 05 14:27:41 compute-0 systemd-rc-local-generator[107074]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:27:41 compute-0 systemd-sysv-generator[107077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:27:41 compute-0 sudo[107047]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:41 compute-0 sudo[107158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxmtnuoydvuizfnbudizqhoaxawnvfyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623259.9700394-456-99034921831030/AnsiballZ_systemd.py'
Jan 05 14:27:41 compute-0 sudo[107158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:42 compute-0 python3.9[107160]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:27:42 compute-0 systemd[1]: Reloading.
Jan 05 14:27:42 compute-0 systemd-rc-local-generator[107191]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:27:42 compute-0 systemd-sysv-generator[107194]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:27:42 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 05 14:27:42 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00c67791ea4c89306f337e51c823dfd75b2709b60056f7d50294cb6526cb034/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 05 14:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00c67791ea4c89306f337e51c823dfd75b2709b60056f7d50294cb6526cb034/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 14:27:42 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827.
Jan 05 14:27:42 compute-0 podman[107202]: 2026-01-05 14:27:42.835020178 +0000 UTC m=+0.180847419 container init c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: + sudo -E kolla_set_configs
Jan 05 14:27:42 compute-0 podman[107202]: 2026-01-05 14:27:42.872572145 +0000 UTC m=+0.218399426 container start c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 05 14:27:42 compute-0 edpm-start-podman-container[107202]: ovn_metadata_agent
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Validating config file
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Copying service configuration files
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Writing out command to execute
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: ++ cat /run_command
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: + CMD=neutron-ovn-metadata-agent
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: + ARGS=
Jan 05 14:27:42 compute-0 ovn_metadata_agent[107217]: + sudo kolla_copy_cacerts
Jan 05 14:27:42 compute-0 podman[107224]: 2026-01-05 14:27:42.990958323 +0000 UTC m=+0.097878122 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 05 14:27:42 compute-0 edpm-start-podman-container[107201]: Creating additional drop-in dependency for "ovn_metadata_agent" (c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827)
Jan 05 14:27:43 compute-0 ovn_metadata_agent[107217]: + [[ ! -n '' ]]
Jan 05 14:27:43 compute-0 ovn_metadata_agent[107217]: + . kolla_extend_start
Jan 05 14:27:43 compute-0 ovn_metadata_agent[107217]: Running command: 'neutron-ovn-metadata-agent'
Jan 05 14:27:43 compute-0 ovn_metadata_agent[107217]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 05 14:27:43 compute-0 ovn_metadata_agent[107217]: + umask 0022
Jan 05 14:27:43 compute-0 ovn_metadata_agent[107217]: + exec neutron-ovn-metadata-agent
Jan 05 14:27:43 compute-0 systemd[1]: Reloading.
Jan 05 14:27:43 compute-0 systemd-sysv-generator[107297]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:27:43 compute-0 systemd-rc-local-generator[107293]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:27:43 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 05 14:27:43 compute-0 sudo[107158]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:44 compute-0 python3.9[107453]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.735 107222 INFO neutron.common.config [-] Logging enabled!
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.735 107222 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.735 107222 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.735 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.736 107222 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.736 107222 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.736 107222 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.736 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.736 107222 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.736 107222 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.736 107222 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.736 107222 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.737 107222 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.737 107222 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.737 107222 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.737 107222 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.737 107222 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.737 107222 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.737 107222 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.737 107222 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.737 107222 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.738 107222 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.738 107222 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.738 107222 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.738 107222 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.738 107222 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.738 107222 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.738 107222 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.738 107222 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.739 107222 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.739 107222 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.739 107222 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.739 107222 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.739 107222 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.739 107222 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.739 107222 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.740 107222 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.740 107222 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.740 107222 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.740 107222 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.740 107222 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.740 107222 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.740 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.740 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.741 107222 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.741 107222 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.741 107222 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.741 107222 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.741 107222 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.741 107222 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.741 107222 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.741 107222 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.741 107222 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.742 107222 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.742 107222 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.742 107222 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.742 107222 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.742 107222 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.742 107222 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.742 107222 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.742 107222 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.742 107222 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.743 107222 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.743 107222 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.743 107222 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.743 107222 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.743 107222 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.743 107222 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.743 107222 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.743 107222 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.743 107222 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.744 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.744 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.744 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.744 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.744 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.744 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.744 107222 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.744 107222 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.745 107222 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.745 107222 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.745 107222 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.745 107222 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.745 107222 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.745 107222 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.745 107222 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.745 107222 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.745 107222 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.746 107222 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.746 107222 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.746 107222 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.746 107222 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.746 107222 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.746 107222 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.746 107222 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.747 107222 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.747 107222 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.747 107222 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.747 107222 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.747 107222 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.747 107222 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.747 107222 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.747 107222 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.748 107222 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.748 107222 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.748 107222 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.748 107222 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.748 107222 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.748 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.748 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.749 107222 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.749 107222 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.749 107222 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.749 107222 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.749 107222 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.749 107222 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.749 107222 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.749 107222 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.750 107222 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.750 107222 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.750 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.750 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.750 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.750 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.750 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.750 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.750 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.751 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.751 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.751 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.751 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.751 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.751 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.751 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.751 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.751 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.752 107222 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.752 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.752 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.752 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.752 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.752 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.752 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.752 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.753 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.753 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.753 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.753 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.753 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.753 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.753 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.753 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.754 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.754 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.754 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.754 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.754 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.754 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.754 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.754 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.755 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.755 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.755 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.755 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.755 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.755 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.755 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.755 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.755 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.756 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.756 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.756 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.756 107222 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.756 107222 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.756 107222 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.756 107222 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.756 107222 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.756 107222 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.757 107222 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.757 107222 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.757 107222 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.757 107222 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.757 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.757 107222 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.757 107222 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.757 107222 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.757 107222 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.758 107222 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.758 107222 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.758 107222 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.758 107222 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.758 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.758 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.758 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.758 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.758 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.759 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.759 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.759 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.759 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.759 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.759 107222 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.759 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.759 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.760 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.760 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.760 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.760 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.760 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.760 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.760 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.760 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.760 107222 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.760 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.761 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.761 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.761 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.761 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.761 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.761 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.761 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.761 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.761 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.762 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.762 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.762 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.762 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.762 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.762 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.762 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.762 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.762 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.763 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.763 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.763 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.763 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.763 107222 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.763 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.763 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.763 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.764 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.764 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.764 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.764 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.764 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.764 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.764 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.765 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.765 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.765 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.765 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.765 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.765 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.765 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.765 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.766 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.766 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.766 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.766 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.766 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.766 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.766 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.766 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.766 107222 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.767 107222 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.767 107222 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.767 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.767 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.767 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.767 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.767 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.767 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.768 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.768 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.768 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.768 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.768 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.768 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.768 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.768 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.768 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.769 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.769 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.769 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.769 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.769 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.769 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.769 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.769 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.769 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.770 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.770 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.770 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.770 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.770 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.770 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.770 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.770 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.770 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.771 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.771 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.771 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.771 107222 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.771 107222 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.780 107222 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.780 107222 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.780 107222 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.781 107222 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.781 107222 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.794 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 82a66401-c715-4a23-aa01-55f1bbd6f669 (UUID: 82a66401-c715-4a23-aa01-55f1bbd6f669) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.816 107222 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.817 107222 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.817 107222 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.817 107222 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.819 107222 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.826 107222 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.833 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '82a66401-c715-4a23-aa01-55f1bbd6f669'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], external_ids={}, name=82a66401-c715-4a23-aa01-55f1bbd6f669, nb_cfg_timestamp=1767623204132, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.833 107222 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fbb88ba7dc0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.834 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.834 107222 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.835 107222 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.835 107222 INFO oslo_service.service [-] Starting 1 workers
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.840 107222 DEBUG oslo_service.service [-] Started child 107530 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.843 107222 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpwn5gfec7/privsep.sock']
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.843 107530 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-375028'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.869 107530 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.870 107530 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.870 107530 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.873 107530 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.880 107530 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 05 14:27:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:44.886 107530 INFO eventlet.wsgi.server [-] (107530) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 05 14:27:45 compute-0 sudo[107607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orqndznezvuwtgflqzfaworthpcvsand ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623264.6971676-497-263246857116045/AnsiballZ_stat.py'
Jan 05 14:27:45 compute-0 sudo[107607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:45 compute-0 python3.9[107610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:27:45 compute-0 sudo[107607]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:45 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 05 14:27:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:45.494 107222 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 05 14:27:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:45.495 107222 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwn5gfec7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 05 14:27:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:45.393 107613 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 14:27:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:45.397 107613 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 14:27:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:45.399 107613 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 05 14:27:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:45.399 107613 INFO oslo.privsep.daemon [-] privsep daemon running as pid 107613
Jan 05 14:27:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:45.500 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[d55560cc-2eea-4e56-bc7c-89d03547d5c5]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:27:45 compute-0 sudo[107738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzwecskergyvyvojkwnffwquntyzgkky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623264.6971676-497-263246857116045/AnsiballZ_copy.py'
Jan 05 14:27:45 compute-0 sudo[107738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:45.959 107613 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:27:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:45.959 107613 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:27:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:45.959 107613 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:27:45 compute-0 python3.9[107740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623264.6971676-497-263246857116045/.source.yaml _original_basename=.dftzavnw follow=False checksum=2e152f686a18a17500f206af768dc46a6557d37b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:27:46 compute-0 sudo[107738]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.437 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[efc19358-32a4-4f86-aaff-c68c2317f463]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.440 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, column=external_ids, values=({'neutron:ovn-metadata-id': '566f31e6-0605-5bc3-acc6-9de8a4d86aa9'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.459 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.468 107222 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.469 107222 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.469 107222 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.469 107222 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.469 107222 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.470 107222 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.470 107222 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.471 107222 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.471 107222 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.471 107222 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.472 107222 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.472 107222 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.472 107222 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.473 107222 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.473 107222 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.473 107222 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.474 107222 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.474 107222 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.474 107222 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.474 107222 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.475 107222 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.475 107222 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.475 107222 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.475 107222 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.476 107222 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.476 107222 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.477 107222 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.477 107222 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.477 107222 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.478 107222 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.478 107222 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.478 107222 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.479 107222 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.479 107222 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.479 107222 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.479 107222 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.480 107222 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.480 107222 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.480 107222 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.481 107222 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.481 107222 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.481 107222 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.482 107222 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.482 107222 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.482 107222 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.482 107222 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.483 107222 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.483 107222 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.483 107222 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.483 107222 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.484 107222 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.484 107222 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.484 107222 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.485 107222 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.485 107222 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.485 107222 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.485 107222 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.485 107222 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.486 107222 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.486 107222 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.486 107222 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.486 107222 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.487 107222 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.487 107222 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.487 107222 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.487 107222 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.488 107222 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.488 107222 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.488 107222 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.489 107222 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.489 107222 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.489 107222 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.489 107222 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.490 107222 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.490 107222 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.490 107222 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.490 107222 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.491 107222 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.491 107222 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.491 107222 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.491 107222 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.492 107222 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.492 107222 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.492 107222 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.492 107222 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.493 107222 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.493 107222 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.493 107222 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.493 107222 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.493 107222 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.494 107222 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.494 107222 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.494 107222 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.494 107222 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.495 107222 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.495 107222 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.495 107222 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.495 107222 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.496 107222 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 sshd-session[98796]: Connection closed by 192.168.122.30 port 48462
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.497 107222 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.498 107222 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.498 107222 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.498 107222 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.499 107222 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.499 107222 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.500 107222 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.500 107222 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.500 107222 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:27:46 compute-0 sshd-session[98793]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.501 107222 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.501 107222 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.502 107222 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.502 107222 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.503 107222 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.503 107222 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.503 107222 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.504 107222 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.504 107222 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.504 107222 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.505 107222 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.505 107222 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.506 107222 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.506 107222 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.506 107222 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Jan 05 14:27:46 compute-0 systemd[1]: session-22.scope: Consumed 44.159s CPU time.
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.507 107222 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.508 107222 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.508 107222 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 systemd-logind[795]: Session 22 logged out. Waiting for processes to exit.
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.509 107222 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.509 107222 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.510 107222 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.510 107222 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.510 107222 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.511 107222 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.511 107222 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 systemd-logind[795]: Removed session 22.
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.511 107222 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.512 107222 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.512 107222 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.513 107222 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.513 107222 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.513 107222 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.514 107222 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.514 107222 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.514 107222 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.515 107222 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.515 107222 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.515 107222 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.516 107222 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.516 107222 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.517 107222 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.517 107222 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.517 107222 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.518 107222 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.518 107222 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.518 107222 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.519 107222 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.519 107222 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.519 107222 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.520 107222 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.520 107222 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.520 107222 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.521 107222 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.522 107222 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.522 107222 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.522 107222 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.522 107222 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.523 107222 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.523 107222 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.523 107222 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.523 107222 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.524 107222 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.524 107222 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.524 107222 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.524 107222 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.525 107222 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.525 107222 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.525 107222 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.525 107222 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.526 107222 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.526 107222 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.526 107222 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.526 107222 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.527 107222 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.527 107222 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.527 107222 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.527 107222 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.528 107222 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.528 107222 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.528 107222 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.528 107222 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.529 107222 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.529 107222 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.529 107222 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.529 107222 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.530 107222 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.530 107222 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.530 107222 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.530 107222 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.531 107222 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.531 107222 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.531 107222 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.531 107222 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.531 107222 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.532 107222 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.532 107222 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.532 107222 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.532 107222 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.533 107222 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.533 107222 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.533 107222 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.533 107222 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.534 107222 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.534 107222 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.534 107222 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.534 107222 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.534 107222 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.535 107222 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.535 107222 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.535 107222 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.535 107222 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.536 107222 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.536 107222 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.536 107222 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.536 107222 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.536 107222 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.537 107222 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.537 107222 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.537 107222 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.537 107222 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.538 107222 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.538 107222 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.538 107222 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.538 107222 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.538 107222 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.539 107222 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.539 107222 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.539 107222 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.539 107222 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.540 107222 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.540 107222 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.540 107222 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.540 107222 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.541 107222 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.541 107222 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.541 107222 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.541 107222 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.542 107222 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.542 107222 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.542 107222 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.542 107222 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.542 107222 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.543 107222 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.543 107222 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.543 107222 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.543 107222 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.544 107222 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.544 107222 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.544 107222 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.544 107222 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.545 107222 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.545 107222 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.545 107222 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.545 107222 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.545 107222 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.546 107222 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.546 107222 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.546 107222 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.546 107222 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.547 107222 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.547 107222 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.547 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.547 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.548 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.548 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.548 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.548 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.549 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.549 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.549 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.549 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.550 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.550 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.550 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.550 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.550 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.551 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.551 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.551 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.551 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.552 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.552 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.552 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.552 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.553 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.553 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.553 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.553 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.554 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.554 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.554 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.554 107222 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.554 107222 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.555 107222 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.555 107222 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.555 107222 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:27:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:27:46.555 107222 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
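The block above is the agent's effective-configuration dump: one DEBUG line per registered option, with secret options (metadata_proxy_shared_secret, transport_url) masked as ****, closed by the asterisk divider from cfg.py:2613. A minimal sketch of how such a dump is produced with oslo.config follows; the option names are copied from the logged values for illustration only, the project name and logger name are assumptions, and the real agent registers far more options through neutron and oslo.service.

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('nova_metadata_host', default='127.0.0.1'),
        cfg.PortOpt('nova_metadata_port', default=8775),
        cfg.StrOpt('metadata_proxy_shared_secret', secret=True),  # secret=True is why the dump shows ****
    ])

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('oslo_service.service')

    CONF([], project='neutron')                # parse command line and config files
    CONF.log_opt_values(LOG, logging.DEBUG)    # the log_opt_values call (cfg.py:2602/2609) seen in every line above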
Jan 05 14:27:52 compute-0 sshd-session[107765]: Accepted publickey for zuul from 192.168.122.30 port 33152 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:27:52 compute-0 systemd-logind[795]: New session 23 of user zuul.
Jan 05 14:27:52 compute-0 systemd[1]: Started Session 23 of User zuul.
Jan 05 14:27:52 compute-0 sshd-session[107765]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:27:53 compute-0 python3.9[107918]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:27:54 compute-0 sudo[108072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqdgqatesgudrrsfiqspnnzzvkzthool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623273.919138-34-182204403648065/AnsiballZ_command.py'
Jan 05 14:27:54 compute-0 sudo[108072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:54 compute-0 python3.9[108074]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:27:54 compute-0 sudo[108072]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:55 compute-0 sudo[108235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcoyxckygjtgaaxwsilkrggcxokxzldr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623275.2993205-45-225100712053915/AnsiballZ_systemd_service.py'
Jan 05 14:27:55 compute-0 sudo[108235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:27:56 compute-0 python3.9[108237]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:27:56 compute-0 systemd[1]: Reloading.
Jan 05 14:27:56 compute-0 systemd-rc-local-generator[108266]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:27:56 compute-0 systemd-sysv-generator[108270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:27:56 compute-0 sudo[108235]: pam_unix(sudo:session): session closed for user root
Jan 05 14:27:57 compute-0 python3.9[108422]: ansible-ansible.builtin.service_facts Invoked
Jan 05 14:27:57 compute-0 network[108439]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 14:27:57 compute-0 network[108440]: 'network-scripts' will be removed from distribution in near future.
Jan 05 14:27:57 compute-0 network[108441]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 14:28:01 compute-0 sudo[108700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuaihungodyhbmtcskhhnaaapomroedh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623281.4874136-64-268029297600297/AnsiballZ_systemd_service.py'
Jan 05 14:28:01 compute-0 sudo[108700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:02 compute-0 python3.9[108702]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:28:02 compute-0 sudo[108700]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:02 compute-0 sudo[108853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agujdhrywtdjulxwsmxxecvityhjaqcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623282.3957648-64-260235360608829/AnsiballZ_systemd_service.py'
Jan 05 14:28:02 compute-0 sudo[108853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:03 compute-0 python3.9[108855]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:28:03 compute-0 sudo[108853]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:03 compute-0 sudo[109006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qivmpryxzjmdkthvscleeqrkwjckkjcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623283.2795553-64-140688172239975/AnsiballZ_systemd_service.py'
Jan 05 14:28:03 compute-0 sudo[109006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:03 compute-0 python3.9[109008]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:28:04 compute-0 sudo[109006]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:04 compute-0 sudo[109159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqyvwxtlmsygcuyiqnanwftamihxsqzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623284.214684-64-275543370626342/AnsiballZ_systemd_service.py'
Jan 05 14:28:04 compute-0 sudo[109159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:04 compute-0 python3.9[109161]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:28:04 compute-0 sudo[109159]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:05 compute-0 sudo[109312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iraecukrybmtegmdehxbxolfvzpskkla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623285.047254-64-57419045514572/AnsiballZ_systemd_service.py'
Jan 05 14:28:05 compute-0 sudo[109312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:05 compute-0 python3.9[109314]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:28:05 compute-0 sudo[109312]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:06 compute-0 sudo[109465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfxbrhshdroitgrdcsnbhhktccnxekkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623285.9187596-64-86334496760030/AnsiballZ_systemd_service.py'
Jan 05 14:28:06 compute-0 sudo[109465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:06 compute-0 python3.9[109467]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:28:06 compute-0 sudo[109465]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:07 compute-0 sudo[109631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfwkaqawomxlavqoxvhvawtikxwqlwzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623286.8609366-64-146711837418673/AnsiballZ_systemd_service.py'
Jan 05 14:28:07 compute-0 sudo[109631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:07 compute-0 podman[109592]: 2026-01-05 14:28:07.398491683 +0000 UTC m=+0.163498939 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 14:28:07 compute-0 python3.9[109639]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:28:07 compute-0 sudo[109631]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:08 compute-0 sudo[109798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bznenxhqoadchbwltfmrptkzmnnukdcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623288.027775-116-13149147256117/AnsiballZ_file.py'
Jan 05 14:28:08 compute-0 sudo[109798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:08 compute-0 python3.9[109800]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:08 compute-0 sudo[109798]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:09 compute-0 sudo[109950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwvuxfjmyndonsdtyqvqqyfzrtugmgel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623289.0111184-116-201567642154345/AnsiballZ_file.py'
Jan 05 14:28:09 compute-0 sudo[109950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:09 compute-0 python3.9[109952]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:09 compute-0 sudo[109950]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:10 compute-0 sudo[110102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfjkjkdgjeqolndlmajedryorzikmvwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623289.8166468-116-264647457158321/AnsiballZ_file.py'
Jan 05 14:28:10 compute-0 sudo[110102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:10 compute-0 python3.9[110104]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:10 compute-0 sudo[110102]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:10 compute-0 sudo[110254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jymlelunamlxnfrjtapegyzfucpvfzmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623290.5384784-116-195260088758457/AnsiballZ_file.py'
Jan 05 14:28:10 compute-0 sudo[110254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:11 compute-0 python3.9[110256]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:11 compute-0 sudo[110254]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:11 compute-0 sudo[110406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phsgcrhrcffuowvqjfmvxeylkpiyhybj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623291.3179576-116-160725784979138/AnsiballZ_file.py'
Jan 05 14:28:11 compute-0 sudo[110406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:11 compute-0 python3.9[110408]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:11 compute-0 sudo[110406]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:12 compute-0 sudo[110558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuuvfceyplfqjlzodqvfjkdgwldfkfka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623292.0722885-116-266262776676187/AnsiballZ_file.py'
Jan 05 14:28:12 compute-0 sudo[110558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:12 compute-0 python3.9[110560]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:12 compute-0 sudo[110558]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:13 compute-0 podman[110684]: 2026-01-05 14:28:13.161107204 +0000 UTC m=+0.063947367 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 14:28:13 compute-0 sudo[110727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kckbofjhfkwylujwavecjhdzjogdkhwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623292.7939339-116-212244442980227/AnsiballZ_file.py'
Jan 05 14:28:13 compute-0 sudo[110727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:13 compute-0 python3.9[110731]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:13 compute-0 sudo[110727]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:13 compute-0 sudo[110882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdlnnxgiszketekhgytfxhaybaizlfuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623293.5508504-166-78763419203839/AnsiballZ_file.py'
Jan 05 14:28:13 compute-0 sudo[110882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:14 compute-0 python3.9[110884]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:14 compute-0 sudo[110882]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:14 compute-0 sudo[111034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rliijrzyowyuywlvvnuitkehtwdpzwth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623294.3334718-166-183538483211472/AnsiballZ_file.py'
Jan 05 14:28:14 compute-0 sudo[111034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:14 compute-0 python3.9[111036]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:14 compute-0 sudo[111034]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:15 compute-0 sudo[111186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wocjodqhdpdnezmoquhyqpvhnqzifhwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623295.1035416-166-48182442839751/AnsiballZ_file.py'
Jan 05 14:28:15 compute-0 sudo[111186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:15 compute-0 python3.9[111188]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:15 compute-0 sudo[111186]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:16 compute-0 sudo[111338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uffgtlixzcbubgceoudxcrkkhvzcdiqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623295.8992522-166-135626806543525/AnsiballZ_file.py'
Jan 05 14:28:16 compute-0 sudo[111338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:16 compute-0 python3.9[111340]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:16 compute-0 sudo[111338]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:17 compute-0 sudo[111490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbqrgqghtxxfhhrqzzfglgshkuhwexiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623296.6978366-166-65139033438789/AnsiballZ_file.py'
Jan 05 14:28:17 compute-0 sudo[111490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:17 compute-0 python3.9[111492]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:17 compute-0 sudo[111490]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:17 compute-0 sudo[111642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chpsikshdgcnraxinkygwdviuwtmvkkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623297.5437808-166-99749544831800/AnsiballZ_file.py'
Jan 05 14:28:17 compute-0 sudo[111642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:18 compute-0 python3.9[111644]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:18 compute-0 sudo[111642]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:18 compute-0 sudo[111794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jacycxekbhsvlqteawtehjjadbjgxtip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623298.298164-166-74462641613183/AnsiballZ_file.py'
Jan 05 14:28:18 compute-0 sudo[111794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:18 compute-0 python3.9[111796]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:28:18 compute-0 sudo[111794]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:19 compute-0 sudo[111946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkpualkvvcqdvtljrramilylnvsbrzqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623299.2237-217-105720645672502/AnsiballZ_command.py'
Jan 05 14:28:19 compute-0 sudo[111946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:19 compute-0 python3.9[111948]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:28:19 compute-0 sudo[111946]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:20 compute-0 python3.9[112101]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 14:28:21 compute-0 sudo[112251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyaonvxgqtiokchwvksrkikyqwxuaxkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623301.0464714-235-262008642717596/AnsiballZ_systemd_service.py'
Jan 05 14:28:21 compute-0 sudo[112251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:21 compute-0 python3.9[112253]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:28:21 compute-0 systemd[1]: Reloading.
Jan 05 14:28:21 compute-0 systemd-rc-local-generator[112276]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:28:21 compute-0 systemd-sysv-generator[112283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:28:22 compute-0 sudo[112251]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:22 compute-0 sudo[112437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjcfdhiywwwojgcklwcgtspveeuhmsll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623302.244561-243-139054761753834/AnsiballZ_command.py'
Jan 05 14:28:22 compute-0 sudo[112437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:22 compute-0 python3.9[112439]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:28:22 compute-0 sudo[112437]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:23 compute-0 sudo[112590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pujptuiatwaeilprtugxamfbtbzpixpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623302.9403563-243-207443485732885/AnsiballZ_command.py'
Jan 05 14:28:23 compute-0 sudo[112590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:23 compute-0 python3.9[112592]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:28:23 compute-0 sudo[112590]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:24 compute-0 sudo[112743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlowygfnzncdvcotzdcwntlbhdvcxvfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623303.748911-243-163203952574165/AnsiballZ_command.py'
Jan 05 14:28:24 compute-0 sudo[112743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:24 compute-0 python3.9[112745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:28:24 compute-0 sudo[112743]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:24 compute-0 sudo[112896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxfxdupoqeshrdukdpneqjcrsgzqymrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623304.6783848-243-208771473182249/AnsiballZ_command.py'
Jan 05 14:28:24 compute-0 sudo[112896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:25 compute-0 python3.9[112898]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:28:25 compute-0 sudo[112896]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:25 compute-0 sudo[113049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmsmegyqbbbivrbwpgygfavvruxqycbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623305.3242803-243-266119966238485/AnsiballZ_command.py'
Jan 05 14:28:25 compute-0 sudo[113049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:25 compute-0 python3.9[113051]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:28:25 compute-0 sudo[113049]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:26 compute-0 sudo[113202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfxmxooayblxdminozhcfmhjejbbegst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623306.022181-243-192534853399568/AnsiballZ_command.py'
Jan 05 14:28:26 compute-0 sudo[113202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:26 compute-0 python3.9[113204]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:28:26 compute-0 sudo[113202]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:27 compute-0 sudo[113355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxntougqsmtsqmgnwjvaeblimrohhobh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623306.825979-243-21292727573002/AnsiballZ_command.py'
Jan 05 14:28:27 compute-0 sudo[113355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:27 compute-0 python3.9[113357]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:28:27 compute-0 sudo[113355]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:28 compute-0 sudo[113508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjkpqlfqoyvofrohcjawirwjgzyacyud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623307.9687214-297-161474868207280/AnsiballZ_getent.py'
Jan 05 14:28:28 compute-0 sudo[113508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:28 compute-0 python3.9[113510]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 05 14:28:28 compute-0 sudo[113508]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:29 compute-0 sudo[113661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaeykvkzkofwxrtqfkrhaqdihgawpbez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623309.0136082-305-211808770683936/AnsiballZ_group.py'
Jan 05 14:28:29 compute-0 sudo[113661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:29 compute-0 python3.9[113663]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 14:28:29 compute-0 groupadd[113664]: group added to /etc/group: name=libvirt, GID=42473
Jan 05 14:28:29 compute-0 groupadd[113664]: group added to /etc/gshadow: name=libvirt
Jan 05 14:28:29 compute-0 groupadd[113664]: new group: name=libvirt, GID=42473
Jan 05 14:28:29 compute-0 sudo[113661]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:30 compute-0 sudo[113819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrrjabhxrzkspdakwjqyyhducaogoyuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623310.0312037-313-186923802383657/AnsiballZ_user.py'
Jan 05 14:28:30 compute-0 sudo[113819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:30 compute-0 python3.9[113821]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 05 14:28:30 compute-0 useradd[113823]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 05 14:28:30 compute-0 sudo[113819]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:31 compute-0 sudo[113979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdvgucgimwoqojwqvxfhgwvqfttjzcea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623311.368313-324-11208006518916/AnsiballZ_setup.py'
Jan 05 14:28:31 compute-0 sudo[113979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:32 compute-0 python3.9[113981]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:28:32 compute-0 sudo[113979]: pam_unix(sudo:session): session closed for user root
Jan 05 14:28:32 compute-0 sudo[114063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlmqrjjsxtatoorpboefnkzgxbaanxdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623311.368313-324-11208006518916/AnsiballZ_dnf.py'
Jan 05 14:28:32 compute-0 sudo[114063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:28:33 compute-0 python3.9[114065]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:28:37 compute-0 podman[114077]: 2026-01-05 14:28:37.673649525 +0000 UTC m=+0.156963976 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:28:43 compute-0 podman[114168]: 2026-01-05 14:28:43.602467427 +0000 UTC m=+0.083563039 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 14:28:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:28:44.773 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:28:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:28:44.774 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:28:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:28:44.774 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:29:01 compute-0 kernel: SELinux:  Converting 2755 SID table entries...
Jan 05 14:29:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 14:29:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 14:29:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 14:29:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 14:29:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 14:29:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 14:29:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 14:29:08 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 05 14:29:08 compute-0 podman[114311]: 2026-01-05 14:29:08.678743722 +0000 UTC m=+0.146837396 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 14:29:10 compute-0 kernel: SELinux:  Converting 2755 SID table entries...
Jan 05 14:29:10 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 14:29:10 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 14:29:10 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 14:29:10 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 14:29:10 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 14:29:10 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 14:29:10 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 14:29:14 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 05 14:29:14 compute-0 podman[114343]: 2026-01-05 14:29:14.634568938 +0000 UTC m=+0.098007387 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:29:39 compute-0 podman[122752]: 2026-01-05 14:29:39.647321129 +0000 UTC m=+0.136750752 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 14:29:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:29:44.773 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:29:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:29:44.774 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:29:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:29:44.774 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:29:45 compute-0 podman[125827]: 2026-01-05 14:29:45.60698087 +0000 UTC m=+0.087154132 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 14:30:10 compute-0 kernel: SELinux:  Converting 2756 SID table entries...
Jan 05 14:30:10 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 14:30:10 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 14:30:10 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 14:30:10 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 14:30:10 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 14:30:10 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 14:30:10 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 14:30:10 compute-0 podman[131263]: 2026-01-05 14:30:10.717808775 +0000 UTC m=+0.202885043 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 05 14:30:10 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Jan 05 14:30:11 compute-0 groupadd[131295]: group added to /etc/group: name=dnsmasq, GID=993
Jan 05 14:30:11 compute-0 groupadd[131295]: group added to /etc/gshadow: name=dnsmasq
Jan 05 14:30:11 compute-0 groupadd[131295]: new group: name=dnsmasq, GID=993
Jan 05 14:30:11 compute-0 useradd[131302]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 05 14:30:11 compute-0 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Jan 05 14:30:11 compute-0 dbus-broker-launch[738]: Noticed file-system modification, trigger reload.
Jan 05 14:30:12 compute-0 groupadd[131315]: group added to /etc/group: name=clevis, GID=992
Jan 05 14:30:12 compute-0 groupadd[131315]: group added to /etc/gshadow: name=clevis
Jan 05 14:30:12 compute-0 groupadd[131315]: new group: name=clevis, GID=992
Jan 05 14:30:12 compute-0 useradd[131322]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 05 14:30:12 compute-0 usermod[131332]: add 'clevis' to group 'tss'
Jan 05 14:30:12 compute-0 usermod[131332]: add 'clevis' to shadow group 'tss'
Jan 05 14:30:15 compute-0 polkitd[43557]: Reloading rules
Jan 05 14:30:15 compute-0 polkitd[43557]: Collecting garbage unconditionally...
Jan 05 14:30:15 compute-0 polkitd[43557]: Loading rules from directory /etc/polkit-1/rules.d
Jan 05 14:30:15 compute-0 polkitd[43557]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 05 14:30:15 compute-0 polkitd[43557]: Finished loading, compiling and executing 3 rules
Jan 05 14:30:15 compute-0 polkitd[43557]: Reloading rules
Jan 05 14:30:15 compute-0 polkitd[43557]: Collecting garbage unconditionally...
Jan 05 14:30:15 compute-0 polkitd[43557]: Loading rules from directory /etc/polkit-1/rules.d
Jan 05 14:30:15 compute-0 polkitd[43557]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 05 14:30:15 compute-0 polkitd[43557]: Finished loading, compiling and executing 3 rules
Jan 05 14:30:15 compute-0 podman[131426]: 2026-01-05 14:30:15.803117506 +0000 UTC m=+0.095962712 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 14:30:16 compute-0 groupadd[131538]: group added to /etc/group: name=ceph, GID=167
Jan 05 14:30:16 compute-0 groupadd[131538]: group added to /etc/gshadow: name=ceph
Jan 05 14:30:16 compute-0 groupadd[131538]: new group: name=ceph, GID=167
Jan 05 14:30:16 compute-0 useradd[131544]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 05 14:30:19 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 05 14:30:19 compute-0 sshd[1006]: Received signal 15; terminating.
Jan 05 14:30:19 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 05 14:30:19 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 05 14:30:19 compute-0 systemd[1]: sshd.service: Consumed 2.292s CPU time, read 32.0K from disk, written 0B to disk.
Jan 05 14:30:19 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 05 14:30:19 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 05 14:30:19 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 14:30:19 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 14:30:19 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 14:30:19 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 05 14:30:19 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 05 14:30:20 compute-0 sshd[132063]: Server listening on 0.0.0.0 port 22.
Jan 05 14:30:20 compute-0 sshd[132063]: Server listening on :: port 22.
Jan 05 14:30:20 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 05 14:30:21 compute-0 sshd-session[132218]: Invalid user solv from 165.22.168.95 port 40972
Jan 05 14:30:22 compute-0 sshd-session[132218]: Connection closed by invalid user solv 165.22.168.95 port 40972 [preauth]
Jan 05 14:30:22 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 14:30:22 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 14:30:22 compute-0 systemd[1]: Reloading.
Jan 05 14:30:22 compute-0 systemd-rc-local-generator[132325]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:30:22 compute-0 systemd-sysv-generator[132328]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:30:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 14:30:25 compute-0 sudo[114063]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:26 compute-0 sudo[135829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpnjhmuynxwfduxwudsyhbndyzilvwvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623426.0105195-336-214564873776720/AnsiballZ_systemd.py'
Jan 05 14:30:26 compute-0 sudo[135829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:26 compute-0 python3.9[135853]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 14:30:26 compute-0 systemd[1]: Reloading.
Jan 05 14:30:27 compute-0 systemd-rc-local-generator[136243]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:30:27 compute-0 systemd-sysv-generator[136247]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:30:27 compute-0 sudo[135829]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:27 compute-0 sudo[136930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whgzlzobdsxefjsejcyglvosxpoqplja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623427.4886746-336-126033334723292/AnsiballZ_systemd.py'
Jan 05 14:30:27 compute-0 sudo[136930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:28 compute-0 python3.9[136956]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 14:30:28 compute-0 systemd[1]: Reloading.
Jan 05 14:30:28 compute-0 systemd-rc-local-generator[137294]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:30:28 compute-0 systemd-sysv-generator[137302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:30:28 compute-0 sudo[136930]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:29 compute-0 sudo[138056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxtzhzsmftmucegjtenqzttvwacimmqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623428.7516813-336-96027372846825/AnsiballZ_systemd.py'
Jan 05 14:30:29 compute-0 sudo[138056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:29 compute-0 python3.9[138082]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 14:30:29 compute-0 systemd[1]: Reloading.
Jan 05 14:30:29 compute-0 systemd-rc-local-generator[138435]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:30:29 compute-0 systemd-sysv-generator[138438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:30:29 compute-0 sudo[138056]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:30 compute-0 sudo[139114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myxwnvmhcfuluxdqndnuxykjnzvoylrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623430.0090122-336-154890089959235/AnsiballZ_systemd.py'
Jan 05 14:30:30 compute-0 sudo[139114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:30 compute-0 python3.9[139136]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 14:30:30 compute-0 systemd[1]: Reloading.
Jan 05 14:30:30 compute-0 systemd-rc-local-generator[139446]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:30:30 compute-0 systemd-sysv-generator[139452]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:30:31 compute-0 sudo[139114]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:31 compute-0 sudo[140219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpbeiwuxwktgkmolgyxbmnpaqwakksbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623431.3288696-365-263394081261239/AnsiballZ_systemd.py'
Jan 05 14:30:31 compute-0 sudo[140219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:32 compute-0 python3.9[140246]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:32 compute-0 systemd[1]: Reloading.
Jan 05 14:30:32 compute-0 systemd-rc-local-generator[140757]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:30:32 compute-0 systemd-sysv-generator[140760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:30:32 compute-0 sudo[140219]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:32 compute-0 sudo[141395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejwqgjrvlwqqauaguzglxwnzedyszzyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623432.5333471-365-40915651978350/AnsiballZ_systemd.py'
Jan 05 14:30:32 compute-0 sudo[141395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:33 compute-0 python3.9[141410]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:33 compute-0 systemd[1]: Reloading.
Jan 05 14:30:33 compute-0 systemd-sysv-generator[141837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:30:33 compute-0 systemd-rc-local-generator[141833]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:30:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 14:30:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 14:30:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 13.871s CPU time.
Jan 05 14:30:33 compute-0 systemd[1]: run-r4a1d9bdee1f441ee8f10081d7b9a23bb.service: Deactivated successfully.
Jan 05 14:30:33 compute-0 sudo[141395]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:34 compute-0 sudo[141993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rstoucweirlkagztbdpuoclafcsktkte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623433.8920133-365-95040265595854/AnsiballZ_systemd.py'
Jan 05 14:30:34 compute-0 sudo[141993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:34 compute-0 python3.9[141995]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:34 compute-0 systemd[1]: Reloading.
Jan 05 14:30:34 compute-0 systemd-rc-local-generator[142025]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:30:34 compute-0 systemd-sysv-generator[142030]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:30:35 compute-0 sudo[141993]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:35 compute-0 sudo[142183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjxqmycdmpwolcfipybdysdaxcbfzrkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623435.3031843-365-66000481185359/AnsiballZ_systemd.py'
Jan 05 14:30:35 compute-0 sudo[142183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:35 compute-0 python3.9[142185]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:36 compute-0 sudo[142183]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:36 compute-0 sudo[142338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvmqobdybiryslafpmsmgknacgexazao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623436.183308-365-261310984161241/AnsiballZ_systemd.py'
Jan 05 14:30:36 compute-0 sudo[142338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:36 compute-0 python3.9[142340]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:37 compute-0 systemd[1]: Reloading.
Jan 05 14:30:37 compute-0 systemd-rc-local-generator[142371]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:30:37 compute-0 systemd-sysv-generator[142375]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:30:37 compute-0 sudo[142338]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:37 compute-0 sudo[142529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxdrudaccuwtrkftivnauceizioqavlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623437.5280454-401-213997606339422/AnsiballZ_systemd.py'
Jan 05 14:30:37 compute-0 sudo[142529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:38 compute-0 python3.9[142531]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 14:30:38 compute-0 systemd[1]: Reloading.
Jan 05 14:30:38 compute-0 systemd-sysv-generator[142563]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:30:38 compute-0 systemd-rc-local-generator[142560]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:30:38 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 05 14:30:38 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 05 14:30:38 compute-0 sudo[142529]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:39 compute-0 sudo[142721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iztnllznfgsuekqtrjmjcakvfpgqmwkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623438.9992821-409-126087173811697/AnsiballZ_systemd.py'
Jan 05 14:30:39 compute-0 sudo[142721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:39 compute-0 python3.9[142723]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:40 compute-0 sudo[142721]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:40 compute-0 podman[142727]: 2026-01-05 14:30:40.908915977 +0000 UTC m=+0.124971970 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:30:41 compute-0 sudo[142903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifhhexgcuiihdvzmdwopqjwvfbhwbmln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623441.0227642-409-3556730235441/AnsiballZ_systemd.py'
Jan 05 14:30:41 compute-0 sudo[142903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:41 compute-0 python3.9[142905]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:41 compute-0 sudo[142903]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:42 compute-0 sudo[143058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdmktfqmffafukmqyicnapmcjngsrcji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623442.0494273-409-15237337104701/AnsiballZ_systemd.py'
Jan 05 14:30:42 compute-0 sudo[143058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:42 compute-0 python3.9[143060]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:42 compute-0 sudo[143058]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:43 compute-0 sudo[143213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txrwqsdxcxedtftoqdvvllqxdhzonmlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623443.0886493-409-95500943835858/AnsiballZ_systemd.py'
Jan 05 14:30:43 compute-0 sudo[143213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:43 compute-0 python3.9[143215]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:43 compute-0 sudo[143213]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:44 compute-0 sudo[143368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftvyamajhauqtqllvpfgkcboqfmawyay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623444.03451-409-271302260291612/AnsiballZ_systemd.py'
Jan 05 14:30:44 compute-0 sudo[143368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:44 compute-0 python3.9[143370]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:30:44.774 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:30:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:30:44.775 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:30:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:30:44.775 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:30:44 compute-0 sudo[143368]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:45 compute-0 sudo[143523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwtrgbqasqfirwnscrcicuystjmvtfle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623445.0017505-409-268928218577846/AnsiballZ_systemd.py'
Jan 05 14:30:45 compute-0 sudo[143523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:45 compute-0 python3.9[143525]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:46 compute-0 podman[143527]: 2026-01-05 14:30:46.606897173 +0000 UTC m=+0.090869367 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 14:30:46 compute-0 sudo[143523]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:47 compute-0 sudo[143699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jracsnxraehpwnauldmxhajirrqnjxfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623447.002598-409-268704419094177/AnsiballZ_systemd.py'
Jan 05 14:30:47 compute-0 sudo[143699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:47 compute-0 python3.9[143701]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:48 compute-0 sudo[143699]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:49 compute-0 sudo[143854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaihfuxeyzwgwwzxafojkjkakvbcrtll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623449.0432103-409-239738420307224/AnsiballZ_systemd.py'
Jan 05 14:30:49 compute-0 sudo[143854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:49 compute-0 python3.9[143856]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:49 compute-0 sudo[143854]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:50 compute-0 sudo[144009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsyugbiusbuiecuelfeozxpseauptaki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623450.0246253-409-248268681065850/AnsiballZ_systemd.py'
Jan 05 14:30:50 compute-0 sudo[144009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:50 compute-0 python3.9[144011]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:50 compute-0 sudo[144009]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:51 compute-0 sudo[144164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aywculenuichefrfxenxpeszbwzrhpnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623451.086249-409-191271436360371/AnsiballZ_systemd.py'
Jan 05 14:30:51 compute-0 sudo[144164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:51 compute-0 python3.9[144166]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:51 compute-0 sudo[144164]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:52 compute-0 sudo[144319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omvftmqauqjlgqjyzvuoogvrclokdrvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623452.0767152-409-166781362557506/AnsiballZ_systemd.py'
Jan 05 14:30:52 compute-0 sudo[144319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:52 compute-0 python3.9[144321]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:52 compute-0 sudo[144319]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:53 compute-0 sudo[144474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egodbmomioaohrtznyevowkruwtfidzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623453.1369119-409-58300840529075/AnsiballZ_systemd.py'
Jan 05 14:30:53 compute-0 sudo[144474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:53 compute-0 python3.9[144476]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:53 compute-0 sudo[144474]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:54 compute-0 sudo[144629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arvoqufrjsrjqxdfxhfwdrxyvxljjihi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623454.1619685-409-133464682000635/AnsiballZ_systemd.py'
Jan 05 14:30:54 compute-0 sudo[144629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:54 compute-0 python3.9[144631]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:54 compute-0 sudo[144629]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:55 compute-0 sudo[144784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrdhgsfykqdjxrkydimavxxgunfphmne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623455.1685002-409-161182526392858/AnsiballZ_systemd.py'
Jan 05 14:30:55 compute-0 sudo[144784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:55 compute-0 python3.9[144786]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 14:30:55 compute-0 sudo[144784]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:56 compute-0 sudo[144939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leblbdwfkttoinppqpagqkokasmgqzgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623456.3287983-511-26873424513144/AnsiballZ_file.py'
Jan 05 14:30:56 compute-0 sudo[144939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:56 compute-0 python3.9[144941]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:30:57 compute-0 sudo[144939]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:57 compute-0 sudo[145091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dffhjzuppjahoekncbqtxwlatzhgcfdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623457.2162578-511-277000037719428/AnsiballZ_file.py'
Jan 05 14:30:57 compute-0 sudo[145091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:57 compute-0 python3.9[145093]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:30:57 compute-0 sudo[145091]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:58 compute-0 sudo[145243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhrbnmfvcbgxfznqnluvkwzidfydunrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623458.035512-511-245176811328039/AnsiballZ_file.py'
Jan 05 14:30:58 compute-0 sudo[145243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:58 compute-0 python3.9[145245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:30:58 compute-0 sudo[145243]: pam_unix(sudo:session): session closed for user root
Jan 05 14:30:59 compute-0 sudo[145395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmrmvcjtklgugnttbbgchdmesaliaxfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623458.8550994-511-210674111557383/AnsiballZ_file.py'
Jan 05 14:30:59 compute-0 sudo[145395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:30:59 compute-0 python3.9[145397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:30:59 compute-0 sudo[145395]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:00 compute-0 sudo[145547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkekqrvwmojhbbrjdhonrifglqiztmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623459.6774626-511-177374409768655/AnsiballZ_file.py'
Jan 05 14:31:00 compute-0 sudo[145547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:00 compute-0 python3.9[145549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:31:00 compute-0 sudo[145547]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:00 compute-0 sudo[145699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nareoynsfappuhplxsbpuwpxkuhzhbcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623460.4704318-511-29902305179647/AnsiballZ_file.py'
Jan 05 14:31:00 compute-0 sudo[145699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:01 compute-0 python3.9[145701]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:31:01 compute-0 sudo[145699]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:01 compute-0 sudo[145851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpumrrrtgdpeukzxytmvhdfwtiksxyvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623461.2987554-554-271202328050807/AnsiballZ_stat.py'
Jan 05 14:31:01 compute-0 sudo[145851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:02 compute-0 python3.9[145853]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:02 compute-0 sudo[145851]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:02 compute-0 sudo[145976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hakrjapznuzxzjbbpxqwdkmwupqxeint ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623461.2987554-554-271202328050807/AnsiballZ_copy.py'
Jan 05 14:31:02 compute-0 sudo[145976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:02 compute-0 python3.9[145978]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767623461.2987554-554-271202328050807/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:02 compute-0 sudo[145976]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:03 compute-0 sudo[146128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtrpybcdyywjrlxgswbutnftxviqmzts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623463.0849724-554-218695656083289/AnsiballZ_stat.py'
Jan 05 14:31:03 compute-0 sudo[146128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:03 compute-0 python3.9[146130]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:03 compute-0 sudo[146128]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:04 compute-0 sudo[146253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwodasjemxgciuswlvzbwaxwsmsfhgmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623463.0849724-554-218695656083289/AnsiballZ_copy.py'
Jan 05 14:31:04 compute-0 sudo[146253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:04 compute-0 python3.9[146255]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767623463.0849724-554-218695656083289/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:04 compute-0 sudo[146253]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:05 compute-0 sudo[146405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpyldtnmlsqrnbdzdjlcyorpxfpozpqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623464.6681643-554-272125263682872/AnsiballZ_stat.py'
Jan 05 14:31:05 compute-0 sudo[146405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:05 compute-0 python3.9[146407]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:05 compute-0 sudo[146405]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:05 compute-0 sudo[146530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyiklfdvlpujezivcdbqaqwuffpyedhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623464.6681643-554-272125263682872/AnsiballZ_copy.py'
Jan 05 14:31:05 compute-0 sudo[146530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:05 compute-0 python3.9[146532]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767623464.6681643-554-272125263682872/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:06 compute-0 sudo[146530]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:06 compute-0 sudo[146682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsuuetcueolanjlzaracfntcibuzxuyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623466.2196581-554-267966999585192/AnsiballZ_stat.py'
Jan 05 14:31:06 compute-0 sudo[146682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:06 compute-0 python3.9[146684]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:06 compute-0 sudo[146682]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:07 compute-0 sudo[146807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtfgzkvkwkvtpcwyfpeabzgorgxwybma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623466.2196581-554-267966999585192/AnsiballZ_copy.py'
Jan 05 14:31:07 compute-0 sudo[146807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:07 compute-0 python3.9[146809]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767623466.2196581-554-267966999585192/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:07 compute-0 sudo[146807]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:08 compute-0 sudo[146959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkwmatnagsecbnzbeoftshnqswyayhzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623467.7518597-554-165035400067351/AnsiballZ_stat.py'
Jan 05 14:31:08 compute-0 sudo[146959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:08 compute-0 python3.9[146961]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:08 compute-0 sudo[146959]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:08 compute-0 sudo[147084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jclxccvecmibgeczxuibxrpzttagfktj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623467.7518597-554-165035400067351/AnsiballZ_copy.py'
Jan 05 14:31:08 compute-0 sudo[147084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:09 compute-0 python3.9[147086]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767623467.7518597-554-165035400067351/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:09 compute-0 sudo[147084]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:09 compute-0 sudo[147236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syxavowdmutmufkwaoolmvivqvzmxxrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623469.240368-554-157095519259732/AnsiballZ_stat.py'
Jan 05 14:31:09 compute-0 sudo[147236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:09 compute-0 python3.9[147238]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:09 compute-0 sudo[147236]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:10 compute-0 sudo[147361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocrrffwcahxibqqwfpnlouysrpkfybfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623469.240368-554-157095519259732/AnsiballZ_copy.py'
Jan 05 14:31:10 compute-0 sudo[147361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:10 compute-0 python3.9[147363]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767623469.240368-554-157095519259732/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:10 compute-0 sudo[147361]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:10 compute-0 sudo[147513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmscggyitdpclvytkolokntxciayqvvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623470.539877-554-5697223497274/AnsiballZ_stat.py'
Jan 05 14:31:10 compute-0 sudo[147513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:11 compute-0 podman[147515]: 2026-01-05 14:31:11.069076199 +0000 UTC m=+0.103615625 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 05 14:31:11 compute-0 python3.9[147516]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:11 compute-0 sudo[147513]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:11 compute-0 sudo[147663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhrfjdiuhzaiphiakecrubvmfnlroytb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623470.539877-554-5697223497274/AnsiballZ_copy.py'
Jan 05 14:31:11 compute-0 sudo[147663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:11 compute-0 python3.9[147665]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767623470.539877-554-5697223497274/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:11 compute-0 sudo[147663]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:12 compute-0 sudo[147815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkqxujxxomjstucrkbaqflubltcrcuyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623472.0918288-554-38188879536681/AnsiballZ_stat.py'
Jan 05 14:31:12 compute-0 sudo[147815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:12 compute-0 python3.9[147817]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:12 compute-0 sudo[147815]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:13 compute-0 sudo[147940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvsfrnxajregpfdsyovgskhpynfxpjcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623472.0918288-554-38188879536681/AnsiballZ_copy.py'
Jan 05 14:31:13 compute-0 sudo[147940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:13 compute-0 python3.9[147942]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767623472.0918288-554-38188879536681/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:13 compute-0 sudo[147940]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:14 compute-0 sudo[148092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwpgdicwzthddcxtinhbenfdnscapzii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623473.6863577-667-151189032761511/AnsiballZ_command.py'
Jan 05 14:31:14 compute-0 sudo[148092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:14 compute-0 python3.9[148094]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 05 14:31:14 compute-0 sudo[148092]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:14 compute-0 sudo[148245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzcuxkkrhmxlfkumiczvfpemqnjyqygk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623474.638386-676-15796006352136/AnsiballZ_file.py'
Jan 05 14:31:14 compute-0 sudo[148245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:15 compute-0 python3.9[148247]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:15 compute-0 sudo[148245]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:15 compute-0 sudo[148397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtmpwyuwncrubljeyxsorqxoadawsztq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623475.3648818-676-33352209579422/AnsiballZ_file.py'
Jan 05 14:31:15 compute-0 sudo[148397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:15 compute-0 python3.9[148399]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:16 compute-0 sudo[148397]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:16 compute-0 sudo[148549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uisdyyilseyeeklyiemshloftvkgucbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623476.1962075-676-119663316044772/AnsiballZ_file.py'
Jan 05 14:31:16 compute-0 sudo[148549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:16 compute-0 python3.9[148551]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:16 compute-0 sudo[148549]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:17 compute-0 podman[148675]: 2026-01-05 14:31:17.42651935 +0000 UTC m=+0.063874343 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:31:17 compute-0 sudo[148711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eudtvgfgwaryntalvupyfwqedzeiusus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623477.0193193-676-57288316128877/AnsiballZ_file.py'
Jan 05 14:31:17 compute-0 sudo[148711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:17 compute-0 python3.9[148719]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:17 compute-0 sudo[148711]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:18 compute-0 sudo[148869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyplrvidyntawipjvlbnknowucihmbrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623477.8617237-676-18463189943592/AnsiballZ_file.py'
Jan 05 14:31:18 compute-0 sudo[148869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:18 compute-0 python3.9[148871]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:18 compute-0 sudo[148869]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:18 compute-0 sudo[149021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkidfvvpwobzltiizzndyotijrkhffva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623478.6211512-676-216900589175929/AnsiballZ_file.py'
Jan 05 14:31:18 compute-0 sudo[149021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:19 compute-0 python3.9[149023]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:19 compute-0 sudo[149021]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:19 compute-0 sudo[149173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntnwnnpedrlsvlvgclnzflhetxuvkgcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623479.3936749-676-211960874235483/AnsiballZ_file.py'
Jan 05 14:31:19 compute-0 sudo[149173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:19 compute-0 python3.9[149175]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:19 compute-0 sudo[149173]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:20 compute-0 sudo[149325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwsbxpxxulmahftwqltozvuvwolqhbzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623480.0558257-676-106693559373573/AnsiballZ_file.py'
Jan 05 14:31:20 compute-0 sudo[149325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:20 compute-0 python3.9[149327]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:20 compute-0 sudo[149325]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:20 compute-0 sudo[149477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdqammeollbhloxbdjhujzkkkgplqmhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623480.6705303-676-150020448574903/AnsiballZ_file.py'
Jan 05 14:31:20 compute-0 sudo[149477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:21 compute-0 python3.9[149479]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:21 compute-0 sudo[149477]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:21 compute-0 sudo[149629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpcdfcrauzdmcffsgzeturjszkkjmgpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623481.3493493-676-109611488206619/AnsiballZ_file.py'
Jan 05 14:31:21 compute-0 sudo[149629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:21 compute-0 python3.9[149631]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:21 compute-0 sudo[149629]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:22 compute-0 sudo[149781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zftahskkwjdyxidaxiqdcajxsaipnltk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623482.0896423-676-148811744367386/AnsiballZ_file.py'
Jan 05 14:31:22 compute-0 sudo[149781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:22 compute-0 python3.9[149783]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:22 compute-0 sudo[149781]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:23 compute-0 sudo[149933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjdmlmvtvngajvfbcvekcjgzbbimrdck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623482.822824-676-116264404230799/AnsiballZ_file.py'
Jan 05 14:31:23 compute-0 sudo[149933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:23 compute-0 python3.9[149935]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:23 compute-0 sudo[149933]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:24 compute-0 sudo[150085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcxcxiunmpmkasljctbflllgjjeujhga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623483.9993033-676-138495865458336/AnsiballZ_file.py'
Jan 05 14:31:24 compute-0 sudo[150085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:24 compute-0 python3.9[150087]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:24 compute-0 sudo[150085]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:25 compute-0 sudo[150237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfnwpefcccmiejzxcoheyshqfpjycew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623484.7444062-676-62475823628430/AnsiballZ_file.py'
Jan 05 14:31:25 compute-0 sudo[150237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:25 compute-0 python3.9[150239]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:25 compute-0 sudo[150237]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:25 compute-0 sudo[150389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxpbjkvqpfgdqontupvgxqudqcvuhjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623485.4979584-775-93419529951867/AnsiballZ_stat.py'
Jan 05 14:31:25 compute-0 sudo[150389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:26 compute-0 python3.9[150391]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:26 compute-0 sudo[150389]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:26 compute-0 sudo[150512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlkywhgshujmbzvcfhrlypbfjdduxhpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623485.4979584-775-93419529951867/AnsiballZ_copy.py'
Jan 05 14:31:26 compute-0 sudo[150512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:26 compute-0 python3.9[150514]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623485.4979584-775-93419529951867/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:26 compute-0 sudo[150512]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:27 compute-0 sudo[150664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsladvhlsuxbefrwwpmiutokigolukkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623486.915403-775-182766193099287/AnsiballZ_stat.py'
Jan 05 14:31:27 compute-0 sudo[150664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:27 compute-0 python3.9[150666]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:27 compute-0 sudo[150664]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:28 compute-0 sudo[150787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytjlnxalhbltshamawpbnadrqoyowjkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623486.915403-775-182766193099287/AnsiballZ_copy.py'
Jan 05 14:31:28 compute-0 sudo[150787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:28 compute-0 python3.9[150789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623486.915403-775-182766193099287/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:28 compute-0 sudo[150787]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:28 compute-0 sudo[150939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nticcgygeiglflqxisvvyachkfolpmlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623488.3650038-775-230015996646622/AnsiballZ_stat.py'
Jan 05 14:31:28 compute-0 sudo[150939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:28 compute-0 python3.9[150941]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:28 compute-0 sudo[150939]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:29 compute-0 sudo[151062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaftnzzqwdmmubyjyhiriseljfizuobh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623488.3650038-775-230015996646622/AnsiballZ_copy.py'
Jan 05 14:31:29 compute-0 sudo[151062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:29 compute-0 python3.9[151064]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623488.3650038-775-230015996646622/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:29 compute-0 sudo[151062]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:30 compute-0 sudo[151214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtgwwrrteltmjpxjmnzedderxbtwnbld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623489.7764564-775-48899567580844/AnsiballZ_stat.py'
Jan 05 14:31:30 compute-0 sudo[151214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:30 compute-0 python3.9[151216]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:30 compute-0 sudo[151214]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:30 compute-0 sudo[151337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wflsqjqhvefpyrorrbpgsgqbxfjkonrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623489.7764564-775-48899567580844/AnsiballZ_copy.py'
Jan 05 14:31:30 compute-0 sudo[151337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:31 compute-0 python3.9[151339]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623489.7764564-775-48899567580844/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:31 compute-0 sudo[151337]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:31 compute-0 sudo[151489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvcbuisyqitvoxfgmfqefxbsvxpvmqqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623491.2392018-775-39873979859614/AnsiballZ_stat.py'
Jan 05 14:31:31 compute-0 sudo[151489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:31 compute-0 python3.9[151491]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:31 compute-0 sudo[151489]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:32 compute-0 sudo[151612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dooekjmjjpwbmvsmtxycxhyuzjiqzngb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623491.2392018-775-39873979859614/AnsiballZ_copy.py'
Jan 05 14:31:32 compute-0 sudo[151612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:32 compute-0 python3.9[151614]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623491.2392018-775-39873979859614/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:32 compute-0 sudo[151612]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:33 compute-0 sudo[151764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knyhnakvjmnzrjnznfxyretblbhkdxgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623492.7446237-775-89176239703598/AnsiballZ_stat.py'
Jan 05 14:31:33 compute-0 sudo[151764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:33 compute-0 python3.9[151766]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:33 compute-0 sudo[151764]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:33 compute-0 sudo[151887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zubmeaoiquyqvkuvjjwglkjgtsqyebpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623492.7446237-775-89176239703598/AnsiballZ_copy.py'
Jan 05 14:31:33 compute-0 sudo[151887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:34 compute-0 python3.9[151889]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623492.7446237-775-89176239703598/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:34 compute-0 sudo[151887]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:34 compute-0 sudo[152039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkohcqapphfpwyfjiovjkxmuztmkxgxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623494.278285-775-71639113431284/AnsiballZ_stat.py'
Jan 05 14:31:34 compute-0 sudo[152039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:34 compute-0 python3.9[152041]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:34 compute-0 sudo[152039]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:35 compute-0 sudo[152162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtktbfvsbwjinrxkrvqhgfeajickpgji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623494.278285-775-71639113431284/AnsiballZ_copy.py'
Jan 05 14:31:35 compute-0 sudo[152162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:35 compute-0 python3.9[152164]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623494.278285-775-71639113431284/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:35 compute-0 sudo[152162]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:35 compute-0 sudo[152314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwmxvnariaerbyrqjzfxcoatougqmrof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623495.5849936-775-206696719857503/AnsiballZ_stat.py'
Jan 05 14:31:35 compute-0 sudo[152314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:36 compute-0 python3.9[152316]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:36 compute-0 sudo[152314]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:36 compute-0 sudo[152437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aterylptxklmssjwiwavberazqzoagtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623495.5849936-775-206696719857503/AnsiballZ_copy.py'
Jan 05 14:31:36 compute-0 sudo[152437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:36 compute-0 python3.9[152439]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623495.5849936-775-206696719857503/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:36 compute-0 sudo[152437]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:37 compute-0 sudo[152589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bflzziibgbodvcmubumvyqfjdcnzdfcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623496.9735146-775-197717677507615/AnsiballZ_stat.py'
Jan 05 14:31:37 compute-0 sudo[152589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:37 compute-0 python3.9[152591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:37 compute-0 sudo[152589]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:38 compute-0 sudo[152712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbwqfwcmavzqhjtolwqscddptvcgymuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623496.9735146-775-197717677507615/AnsiballZ_copy.py'
Jan 05 14:31:38 compute-0 sudo[152712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:38 compute-0 python3.9[152714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623496.9735146-775-197717677507615/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:38 compute-0 sudo[152712]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:38 compute-0 sudo[152864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldynuqrstlgbhtpvntkgtnyivinzrlck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623498.5051534-775-96779633940092/AnsiballZ_stat.py'
Jan 05 14:31:38 compute-0 sudo[152864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:39 compute-0 python3.9[152866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:39 compute-0 sudo[152864]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:39 compute-0 sudo[152987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmjfsehnmxtqebdynabyvyqaxurtknud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623498.5051534-775-96779633940092/AnsiballZ_copy.py'
Jan 05 14:31:39 compute-0 sudo[152987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:39 compute-0 python3.9[152989]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623498.5051534-775-96779633940092/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:39 compute-0 sudo[152987]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:40 compute-0 sudo[153139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kexezevzvaplkuwlkqvfmwkrtdajbufx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623500.0332105-775-64189204900720/AnsiballZ_stat.py'
Jan 05 14:31:40 compute-0 sudo[153139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:40 compute-0 python3.9[153141]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:40 compute-0 sudo[153139]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:41 compute-0 sudo[153262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saiyuxbrhualeuwoklleloxfyxuhcscj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623500.0332105-775-64189204900720/AnsiballZ_copy.py'
Jan 05 14:31:41 compute-0 sudo[153262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:41 compute-0 podman[153264]: 2026-01-05 14:31:41.260332415 +0000 UTC m=+0.133776094 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 14:31:41 compute-0 python3.9[153265]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623500.0332105-775-64189204900720/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:41 compute-0 sudo[153262]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:41 compute-0 sudo[153440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fagksnuduqhmiskxritpclmqavnhqzei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623501.5271754-775-38536388118208/AnsiballZ_stat.py'
Jan 05 14:31:41 compute-0 sudo[153440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:42 compute-0 python3.9[153442]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:42 compute-0 sudo[153440]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:42 compute-0 sudo[153563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkgqdderustdbgozvolbmjtazezleiau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623501.5271754-775-38536388118208/AnsiballZ_copy.py'
Jan 05 14:31:42 compute-0 sudo[153563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:42 compute-0 python3.9[153565]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623501.5271754-775-38536388118208/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:42 compute-0 sudo[153563]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:43 compute-0 sudo[153715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfuetkzhclxskjhjcutttsumdicwscsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623502.9481888-775-60795183623020/AnsiballZ_stat.py'
Jan 05 14:31:43 compute-0 sudo[153715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:43 compute-0 python3.9[153717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:43 compute-0 sudo[153715]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:44 compute-0 sudo[153838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqaxsmikjmzgmmlqzwghwitjembehohu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623502.9481888-775-60795183623020/AnsiballZ_copy.py'
Jan 05 14:31:44 compute-0 sudo[153838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:44 compute-0 python3.9[153840]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623502.9481888-775-60795183623020/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:44 compute-0 sudo[153838]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:31:44.776 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:31:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:31:44.777 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:31:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:31:44.777 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:31:44 compute-0 sudo[153990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acvzcpogvujtylqljiqiwojbrqlauktm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623504.4523919-775-129433085028277/AnsiballZ_stat.py'
Jan 05 14:31:44 compute-0 sudo[153990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:45 compute-0 python3.9[153992]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:31:45 compute-0 sudo[153990]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:45 compute-0 sudo[154113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsxyiiedfzjeqfrbomhajrhgtfezzmkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623504.4523919-775-129433085028277/AnsiballZ_copy.py'
Jan 05 14:31:45 compute-0 sudo[154113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:45 compute-0 python3.9[154115]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623504.4523919-775-129433085028277/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:45 compute-0 sudo[154113]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:46 compute-0 python3.9[154265]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:31:47 compute-0 sudo[154418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdlteixwijzxsgovmjerzkjfarpdfxfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623506.9402485-981-182584065840891/AnsiballZ_seboolean.py'
Jan 05 14:31:47 compute-0 sudo[154418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:47 compute-0 podman[154419]: 2026-01-05 14:31:47.615502125 +0000 UTC m=+0.086602755 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 14:31:47 compute-0 python3.9[154421]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 05 14:31:48 compute-0 sudo[154418]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:49 compute-0 sudo[154590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvzskzicncskeppxaisiflttxcfomzgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623509.179008-989-252477971683072/AnsiballZ_copy.py'
Jan 05 14:31:49 compute-0 dbus-broker-launch[771]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Jan 05 14:31:49 compute-0 sudo[154590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:49 compute-0 python3.9[154592]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:49 compute-0 sudo[154590]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:50 compute-0 sudo[154742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuarpephrnepndgicpyypuwbjfppvotl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623509.916401-989-221722380877705/AnsiballZ_copy.py'
Jan 05 14:31:50 compute-0 sudo[154742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:50 compute-0 python3.9[154744]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:50 compute-0 sudo[154742]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:51 compute-0 sudo[154894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gisxnykxyfufeikcfbrykmpirbhdrnpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623510.703747-989-127594978868348/AnsiballZ_copy.py'
Jan 05 14:31:51 compute-0 sudo[154894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:51 compute-0 python3.9[154896]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:51 compute-0 sudo[154894]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:51 compute-0 sudo[155046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxfmebhmumzhhhsddjzjblnwhjlobcbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623511.5458248-989-69713121484774/AnsiballZ_copy.py'
Jan 05 14:31:51 compute-0 sudo[155046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:52 compute-0 python3.9[155048]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:52 compute-0 sudo[155046]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:52 compute-0 sudo[155198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieinogqldcaljszwrgkkoutpjgugosiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623512.3346045-989-253801951506468/AnsiballZ_copy.py'
Jan 05 14:31:52 compute-0 sudo[155198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:52 compute-0 python3.9[155200]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:52 compute-0 sudo[155198]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:53 compute-0 sudo[155350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmeitpgydodtiwwpndxgtuqfpulijuxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623513.111556-1025-156836919669454/AnsiballZ_copy.py'
Jan 05 14:31:53 compute-0 sudo[155350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:53 compute-0 python3.9[155352]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:53 compute-0 sudo[155350]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:54 compute-0 sudo[155502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otsjgcyyslzxmmakflmoctbqsbghhgqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623513.8113506-1025-38499341634829/AnsiballZ_copy.py'
Jan 05 14:31:54 compute-0 sudo[155502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:54 compute-0 python3.9[155504]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:54 compute-0 sudo[155502]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:54 compute-0 sudo[155654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jczwsczeiukbgcvhwleyhxycxdfisfnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623514.5948918-1025-134692801309047/AnsiballZ_copy.py'
Jan 05 14:31:54 compute-0 sudo[155654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:55 compute-0 python3.9[155656]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:55 compute-0 sudo[155654]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:55 compute-0 sudo[155806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iosbwfrepywpsxbhbzjexfdeskdilbef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623515.338581-1025-169355342229534/AnsiballZ_copy.py'
Jan 05 14:31:55 compute-0 sudo[155806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:55 compute-0 python3.9[155808]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:55 compute-0 sudo[155806]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:56 compute-0 sudo[155958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzjofkccdtsrgfoknwjmlozosxelfupk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623516.1103294-1025-220704779629727/AnsiballZ_copy.py'
Jan 05 14:31:56 compute-0 sudo[155958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:56 compute-0 python3.9[155960]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:31:56 compute-0 sudo[155958]: pam_unix(sudo:session): session closed for user root
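The six copy tasks above stage the libvirt/QEMU TLS material: the CA certificate to /etc/pki/CA/cacert.pem (root:root, 0644) and the server/client certificate and key pairs plus a CA copy under /etc/pki/qemu (root:qemu, 0640). A quick way to confirm the files landed with the expected ownership and that the certificates chain to the staged CA; this is a minimal verification sketch using standard tooling, not part of the playbook:

    # Ownership and mode should match the copy task parameters
    stat -c '%U:%G %a %n' /etc/pki/CA/cacert.pem /etc/pki/qemu/*.pem
    # Server and client certificates should verify against the copied CA
    openssl verify -CAfile /etc/pki/qemu/ca-cert.pem \
        /etc/pki/qemu/server-cert.pem /etc/pki/qemu/client-cert.pem
    # Inspect subject/issuer of the server certificate
    openssl x509 -noout -subject -issuer -in /etc/pki/qemu/server-cert.pem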
Jan 05 14:31:57 compute-0 sudo[156110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reyjsoxakphmiwgsmugdptwlwicckyaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623516.975552-1061-210503890739742/AnsiballZ_systemd.py'
Jan 05 14:31:57 compute-0 sudo[156110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:57 compute-0 python3.9[156112]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:31:57 compute-0 systemd[1]: Reloading.
Jan 05 14:31:57 compute-0 systemd-rc-local-generator[156133]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:31:57 compute-0 systemd-sysv-generator[156137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:31:57 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 05 14:31:57 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 05 14:31:57 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 05 14:31:57 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 05 14:31:58 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 05 14:31:58 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 05 14:31:58 compute-0 sudo[156110]: pam_unix(sudo:session): session closed for user root
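The ansible.builtin.systemd task above (daemon_reload=True, state=restarted) is roughly what the following manual sequence would do; virtlogd is socket-activated, which is why the journal shows its main and admin sockets being started before the service. A sketch of the equivalent commands, not taken from the playbook:

    systemctl daemon-reload
    systemctl restart virtlogd.service
    # Socket units the log shows coming up alongside the service
    systemctl status virtlogd.socket virtlogd-admin.socket --no-pager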
Jan 05 14:31:58 compute-0 sudo[156302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwhanhhwjazglkdviptplsznqmxharhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623518.3132298-1061-109266784855843/AnsiballZ_systemd.py'
Jan 05 14:31:58 compute-0 sudo[156302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:31:59 compute-0 python3.9[156304]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:31:59 compute-0 systemd[1]: Reloading.
Jan 05 14:31:59 compute-0 systemd-sysv-generator[156334]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:31:59 compute-0 systemd-rc-local-generator[156328]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:31:59 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 05 14:31:59 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 05 14:31:59 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 05 14:31:59 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 05 14:31:59 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 05 14:31:59 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 05 14:31:59 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 05 14:31:59 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 05 14:31:59 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 05 14:31:59 compute-0 sudo[156302]: pam_unix(sudo:session): session closed for user root
Jan 05 14:31:59 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 05 14:31:59 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 05 14:31:59 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 05 14:32:00 compute-0 sudo[156526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stcpbbevtmekiwjhwzakohpcdtfcvmgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623519.6397645-1061-70647120946013/AnsiballZ_systemd.py'
Jan 05 14:32:00 compute-0 sudo[156526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:00 compute-0 python3.9[156528]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:32:00 compute-0 systemd[1]: Reloading.
Jan 05 14:32:00 compute-0 systemd-rc-local-generator[156559]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:32:00 compute-0 systemd-sysv-generator[156562]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:32:00 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 05 14:32:00 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 05 14:32:00 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 05 14:32:00 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 05 14:32:00 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 14:32:00 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 14:32:00 compute-0 sudo[156526]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:00 compute-0 setroubleshoot[156340]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l cd725c84-5a2e-402f-ab4e-e67f48a260a1
Jan 05 14:32:00 compute-0 setroubleshoot[156340]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 05 14:32:00 compute-0 setroubleshoot[156340]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l cd725c84-5a2e-402f-ab4e-e67f48a260a1
Jan 05 14:32:00 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 14:32:00 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 14:32:00 compute-0 setroubleshoot[156340]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
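The two suggestion blocks above describe the usual triage path for an AVC denial: enable full auditing so the offending path is recorded, reproduce the denial, then either fix ownership/permissions or build a local policy module. Collected into one sequence (the commands mirror the setroubleshoot output; my-virtlogd is the plugin's example module name, and the module should only be loaded if the access is judged legitimate):

    # Record PATH information for subsequent AVCs, then reproduce the denial
    auditctl -w /etc/shadow -p w
    systemctl restart virtlogd.service      # one way to retrigger the startup-time denial
    ausearch -m avc -ts recent
    # If the access is legitimate, generate and load a local policy module
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp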
                                                  
Jan 05 14:32:01 compute-0 anacron[30828]: Job `cron.daily' started
Jan 05 14:32:01 compute-0 anacron[30828]: Job `cron.daily' terminated
Jan 05 14:32:01 compute-0 sudo[156744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brpwchfkhzrxobrvbojmddktphehxsmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623520.9872887-1061-28942561251242/AnsiballZ_systemd.py'
Jan 05 14:32:01 compute-0 sudo[156744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:01 compute-0 python3.9[156746]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:32:01 compute-0 systemd[1]: Reloading.
Jan 05 14:32:01 compute-0 systemd-rc-local-generator[156774]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:32:01 compute-0 systemd-sysv-generator[156777]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:32:02 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 05 14:32:02 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 05 14:32:02 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 05 14:32:02 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 05 14:32:02 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 05 14:32:02 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 05 14:32:02 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 05 14:32:02 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 05 14:32:02 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 05 14:32:02 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 05 14:32:02 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 05 14:32:02 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 05 14:32:02 compute-0 sudo[156744]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:02 compute-0 sudo[156959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hctnzldtmxjnfhmnqocyjtfixdtwuxrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623522.3419678-1061-71467317842858/AnsiballZ_systemd.py'
Jan 05 14:32:02 compute-0 sudo[156959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:03 compute-0 python3.9[156961]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:32:03 compute-0 systemd[1]: Reloading.
Jan 05 14:32:03 compute-0 systemd-rc-local-generator[156990]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:32:03 compute-0 systemd-sysv-generator[156994]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:32:03 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 05 14:32:03 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 05 14:32:03 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 05 14:32:03 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 05 14:32:03 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 05 14:32:03 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 05 14:32:03 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 05 14:32:03 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 05 14:32:03 compute-0 sudo[156959]: pam_unix(sudo:session): session closed for user root
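With virtsecretd restarted, all five modular libvirt daemons used here (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd) have been reloaded and restarted. A quick status check across the set; a sketch assuming the standard modular-daemon unit names:

    for d in virtlogd virtnodedevd virtproxyd virtqemud virtsecretd; do
        systemctl is-active "$d.service" "$d.socket"
    done
    systemctl list-sockets --no-pager | grep -E 'virt(logd|nodedevd|proxyd|qemud|secretd)'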
Jan 05 14:32:04 compute-0 sudo[157171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puxfxemjvrdvqujemelnhbobhdcktcsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623523.825864-1098-84895616989115/AnsiballZ_file.py'
Jan 05 14:32:04 compute-0 sudo[157171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:04 compute-0 python3.9[157173]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:04 compute-0 sudo[157171]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:05 compute-0 sudo[157323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-louljktwianhjvekldwolnztlasvyyog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623524.7399461-1106-219874828609527/AnsiballZ_find.py'
Jan 05 14:32:05 compute-0 sudo[157323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:05 compute-0 python3.9[157325]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 14:32:05 compute-0 sudo[157323]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:06 compute-0 sudo[157475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bartoeehfezhococciftaauecddwdpcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623525.7846887-1120-3119743367142/AnsiballZ_stat.py'
Jan 05 14:32:06 compute-0 sudo[157475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:06 compute-0 python3.9[157477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:06 compute-0 sudo[157475]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:06 compute-0 sudo[157598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggrlwgdxqtaqhobxztihxcxacmlztlwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623525.7846887-1120-3119743367142/AnsiballZ_copy.py'
Jan 05 14:32:06 compute-0 sudo[157598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:07 compute-0 python3.9[157600]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623525.7846887-1120-3119743367142/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:07 compute-0 sudo[157598]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:07 compute-0 sudo[157750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldxlhmfacelnpqzblnjldunxnxipefqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623527.4943748-1136-255140544967295/AnsiballZ_file.py'
Jan 05 14:32:07 compute-0 sudo[157750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:08 compute-0 python3.9[157752]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:08 compute-0 sudo[157750]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:08 compute-0 sudo[157902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljkqxojuxffyuxounmiwbcezzoittgoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623528.2904844-1144-29974317468912/AnsiballZ_stat.py'
Jan 05 14:32:08 compute-0 sudo[157902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:08 compute-0 python3.9[157904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:08 compute-0 sudo[157902]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:09 compute-0 sudo[157980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auvxjqgxsbrydqxkboslqncurfwkmjtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623528.2904844-1144-29974317468912/AnsiballZ_file.py'
Jan 05 14:32:09 compute-0 sudo[157980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:09 compute-0 python3.9[157982]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:09 compute-0 sudo[157980]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:10 compute-0 sudo[158132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lswxgrccnxbittpkilfjkvlkdchxmimy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623529.6277184-1156-2747463806705/AnsiballZ_stat.py'
Jan 05 14:32:10 compute-0 sudo[158132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:10 compute-0 python3.9[158134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:10 compute-0 sudo[158132]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:10 compute-0 sudo[158210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpedgdbnkugoiobhfpscklirwrbaenmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623529.6277184-1156-2747463806705/AnsiballZ_file.py'
Jan 05 14:32:10 compute-0 sudo[158210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:10 compute-0 python3.9[158212]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3axcux1b recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:10 compute-0 sudo[158210]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:11 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 05 14:32:11 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.114s CPU time.
Jan 05 14:32:11 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 05 14:32:11 compute-0 sudo[158363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjqquulbedtlkjpfilukslntkjntmgfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623530.9991415-1168-156038872203489/AnsiballZ_stat.py'
Jan 05 14:32:11 compute-0 sudo[158363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:11 compute-0 podman[158365]: 2026-01-05 14:32:11.459853089 +0000 UTC m=+0.106852651 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 05 14:32:11 compute-0 python3.9[158366]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:11 compute-0 sudo[158363]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:11 compute-0 sudo[158468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcilitgtqarivhmuhgzebczonxgwcofu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623530.9991415-1168-156038872203489/AnsiballZ_file.py'
Jan 05 14:32:11 compute-0 sudo[158468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:12 compute-0 python3.9[158470]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:12 compute-0 sudo[158468]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:12 compute-0 sudo[158620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cygmwpalapcplyngjmsnacfrrehhozje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623532.4338727-1181-10245069474564/AnsiballZ_command.py'
Jan 05 14:32:12 compute-0 sudo[158620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:13 compute-0 python3.9[158622]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:32:13 compute-0 sudo[158620]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:13 compute-0 sudo[158773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blrobeiedftwcsgnaggryxucaeovjzuo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623533.3076997-1189-57939484227079/AnsiballZ_edpm_nftables_from_files.py'
Jan 05 14:32:13 compute-0 sudo[158773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:14 compute-0 python3[158775]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 05 14:32:14 compute-0 sudo[158773]: pam_unix(sudo:session): session closed for user root
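edpm_nftables_from_files reads the YAML rule fragments staged under /var/lib/edpm-config/firewall (libvirt.yaml plus the base and user rule files written above); the tasks that follow render the result into nft include files under /etc/nftables. To see the inputs and the rendered outputs side by side; a sketch using the file names that appear in this log:

    # Inputs consumed by edpm_nftables_from_files
    ls -l /var/lib/edpm-config/firewall/*.yaml
    # Rendered fragments managed by the subsequent template/copy tasks
    ls -l /etc/nftables/iptables.nft /etc/nftables/edpm-*.nft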
Jan 05 14:32:14 compute-0 sudo[158925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgqlrbklzuuvoiyhswajilsyuovlfrkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623534.3052073-1197-74752485993700/AnsiballZ_stat.py'
Jan 05 14:32:14 compute-0 sudo[158925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:14 compute-0 python3.9[158927]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:14 compute-0 sudo[158925]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:15 compute-0 sudo[159003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebfvazscugspwttcxxrjtkqxqwoaiuoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623534.3052073-1197-74752485993700/AnsiballZ_file.py'
Jan 05 14:32:15 compute-0 sudo[159003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:15 compute-0 python3.9[159005]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:15 compute-0 sudo[159003]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:15 compute-0 sudo[159155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyitqcazrnlazpycfjkcmyrubokqvxjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623535.5680587-1209-5742697222761/AnsiballZ_stat.py'
Jan 05 14:32:15 compute-0 sudo[159155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:16 compute-0 python3.9[159157]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:16 compute-0 sudo[159155]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:16 compute-0 sudo[159233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyhbflxzavlszrrbqhbpvloohtyewiab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623535.5680587-1209-5742697222761/AnsiballZ_file.py'
Jan 05 14:32:16 compute-0 sudo[159233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:16 compute-0 python3.9[159235]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:16 compute-0 sudo[159233]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:17 compute-0 sudo[159385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqpvalewgmwqfithcyzgzfyxclyluxjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623536.9058201-1221-198880653243339/AnsiballZ_stat.py'
Jan 05 14:32:17 compute-0 sudo[159385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:17 compute-0 python3.9[159387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:17 compute-0 sudo[159385]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:17 compute-0 sudo[159475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwkxepbkskyqocgejwmoohhablwcqqyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623536.9058201-1221-198880653243339/AnsiballZ_file.py'
Jan 05 14:32:17 compute-0 sudo[159475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:17 compute-0 podman[159437]: 2026-01-05 14:32:17.833642581 +0000 UTC m=+0.047713434 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 05 14:32:17 compute-0 python3.9[159482]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:18 compute-0 sudo[159475]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:18 compute-0 sudo[159632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zryagnkgcggyroccskxfphqtdvrguzpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623538.1785643-1233-105477656146924/AnsiballZ_stat.py'
Jan 05 14:32:18 compute-0 sudo[159632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:18 compute-0 python3.9[159634]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:18 compute-0 sudo[159632]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:19 compute-0 sudo[159710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqaulwhypklorjynfqrxvbghbaganuqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623538.1785643-1233-105477656146924/AnsiballZ_file.py'
Jan 05 14:32:19 compute-0 sudo[159710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:19 compute-0 python3.9[159712]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:19 compute-0 sudo[159710]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:19 compute-0 sudo[159862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vftgoygzobyqneotfwzkdugkdupdeayo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623539.4918633-1245-180586840034234/AnsiballZ_stat.py'
Jan 05 14:32:19 compute-0 sudo[159862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:20 compute-0 python3.9[159864]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:20 compute-0 sudo[159862]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:20 compute-0 sudo[159987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gekgaxvccufvfejoovupabsfnzbcmtzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623539.4918633-1245-180586840034234/AnsiballZ_copy.py'
Jan 05 14:32:20 compute-0 sudo[159987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:20 compute-0 python3.9[159989]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623539.4918633-1245-180586840034234/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:20 compute-0 sudo[159987]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:21 compute-0 sudo[160139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iodyrpufguqoynqctdxzjppseesliwwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623541.1722987-1260-55885357432041/AnsiballZ_file.py'
Jan 05 14:32:21 compute-0 sudo[160139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:21 compute-0 python3.9[160141]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:21 compute-0 sudo[160139]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:22 compute-0 sudo[160291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvlimpioxlxcqwffbcjzbgkucptdormr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623542.054463-1268-196698038873766/AnsiballZ_command.py'
Jan 05 14:32:22 compute-0 sudo[160291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:22 compute-0 python3.9[160293]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:32:22 compute-0 sudo[160291]: pam_unix(sudo:session): session closed for user root
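The command above concatenates the generated fragments in load order (chains, flushes, rules, update-jumps, jumps) and feeds them to nft in check mode, so syntax or reference errors are caught before anything is applied. The same pipeline, spelled out:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft \
      | nft -c -f -    # -c: check only, do not install; -f -: read ruleset from stdin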
Jan 05 14:32:23 compute-0 sudo[160446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdsoucpbutotqrzkakeyubqdqkmlnfqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623542.9138608-1276-103477976512232/AnsiballZ_blockinfile.py'
Jan 05 14:32:23 compute-0 sudo[160446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:23 compute-0 python3.9[160448]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:23 compute-0 sudo[160446]: pam_unix(sudo:session): session closed for user root
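The blockinfile task keeps a marker-delimited block in /etc/sysconfig/nftables.conf so that nftables.service loads the EDPM include files at boot, and validates the edited file with nft -c -f %s before writing it. Based on the block and marker parameters shown above, the managed section should look like this after the task:

    grep -n -A 5 'BEGIN ANSIBLE MANAGED BLOCK' /etc/sysconfig/nftables.conf
    # Expected block content:
    #   # BEGIN ANSIBLE MANAGED BLOCK
    #   include "/etc/nftables/iptables.nft"
    #   include "/etc/nftables/edpm-chains.nft"
    #   include "/etc/nftables/edpm-rules.nft"
    #   include "/etc/nftables/edpm-jumps.nft"
    #   # END ANSIBLE MANAGED BLOCK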
Jan 05 14:32:24 compute-0 sudo[160598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwhtmdtfjhxhwerrdgfkwldsymswifrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623544.1647444-1285-30263496824265/AnsiballZ_command.py'
Jan 05 14:32:24 compute-0 sudo[160598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:24 compute-0 python3.9[160600]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:32:24 compute-0 sudo[160598]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:25 compute-0 sudo[160751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joxddlebnqxrscqyixqsogmarjsxscpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623545.0371153-1293-103644674526232/AnsiballZ_stat.py'
Jan 05 14:32:25 compute-0 sudo[160751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:25 compute-0 python3.9[160753]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:32:25 compute-0 sudo[160751]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:26 compute-0 sudo[160905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqulgtwvcodcejhmsmncupytvtrnftlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623545.8125045-1301-268065928827750/AnsiballZ_command.py'
Jan 05 14:32:26 compute-0 sudo[160905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:26 compute-0 python3.9[160907]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:32:26 compute-0 sudo[160905]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:26 compute-0 sudo[161060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzhvebjtjfwapfdjpnnwuwnbxprxhwji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623546.6819303-1309-164583520242323/AnsiballZ_file.py'
Jan 05 14:32:26 compute-0 sudo[161060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:27 compute-0 python3.9[161062]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:27 compute-0 sudo[161060]: pam_unix(sudo:session): session closed for user root
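The touch/stat/remove sequence around /etc/nftables/edpm-rules.nft.changed is a marker-file pattern: the marker is created when the rule file actually changed, its presence gates the live reload (flushes, rules and update-jumps piped into nft -f - above), and it is deleted once the reload has been applied. A condensed shell sketch of the same idiom; the real flow is driven by the Ansible tasks shown in the log:

    changed=/etc/nftables/edpm-rules.nft.changed
    if [ -e "$changed" ]; then
        # Reload only the parts that are safe to re-apply in place
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f "$changed"
    fi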
Jan 05 14:32:27 compute-0 sudo[161212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkbvnjwyacntnvwacqaatifmmoosabua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623547.4630415-1317-218367277222163/AnsiballZ_stat.py'
Jan 05 14:32:27 compute-0 sudo[161212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:27 compute-0 python3.9[161214]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:27 compute-0 sudo[161212]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:28 compute-0 sudo[161335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggeryzgfjetaeodlnvdiqempruvbrctp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623547.4630415-1317-218367277222163/AnsiballZ_copy.py'
Jan 05 14:32:28 compute-0 sudo[161335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:28 compute-0 python3.9[161337]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623547.4630415-1317-218367277222163/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:28 compute-0 sudo[161335]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:29 compute-0 sudo[161487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcutgagqowyrlicfojiaeixexenbfhdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623548.8293293-1332-173874748699124/AnsiballZ_stat.py'
Jan 05 14:32:29 compute-0 sudo[161487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:29 compute-0 python3.9[161489]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:29 compute-0 sudo[161487]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:29 compute-0 sudo[161610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijtyacjzgrhjxhuszmkbhzxeefvkdybw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623548.8293293-1332-173874748699124/AnsiballZ_copy.py'
Jan 05 14:32:29 compute-0 sudo[161610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:30 compute-0 python3.9[161612]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623548.8293293-1332-173874748699124/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:30 compute-0 sudo[161610]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:30 compute-0 sudo[161762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrqmqwxaddzvswqnkpdjusjbomrpszcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623550.3988533-1347-252143995158472/AnsiballZ_stat.py'
Jan 05 14:32:30 compute-0 sudo[161762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:30 compute-0 python3.9[161764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:32:30 compute-0 sudo[161762]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:31 compute-0 sudo[161885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcbkqvgjqqjguzcrhvdojsoybwidqabd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623550.3988533-1347-252143995158472/AnsiballZ_copy.py'
Jan 05 14:32:31 compute-0 sudo[161885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:31 compute-0 python3.9[161887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623550.3988533-1347-252143995158472/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:32:31 compute-0 sudo[161885]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:32 compute-0 sudo[162037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mseiqambqmwnboapmzxuxqvwouajvxns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623551.9193273-1362-137548662534689/AnsiballZ_systemd.py'
Jan 05 14:32:32 compute-0 sudo[162037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:32 compute-0 python3.9[162039]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:32:32 compute-0 systemd[1]: Reloading.
Jan 05 14:32:32 compute-0 systemd-rc-local-generator[162068]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:32:32 compute-0 systemd-sysv-generator[162072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:32:32 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 05 14:32:32 compute-0 sudo[162037]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:33 compute-0 sudo[162229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arawxafcgnumekwtfnxtpexucmfmawqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623553.2615924-1370-65535876294338/AnsiballZ_systemd.py'
Jan 05 14:32:33 compute-0 sudo[162229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:33 compute-0 python3.9[162231]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 05 14:32:33 compute-0 systemd[1]: Reloading.
Jan 05 14:32:34 compute-0 systemd-rc-local-generator[162259]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:32:34 compute-0 systemd-sysv-generator[162262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:32:34 compute-0 systemd[1]: Reloading.
Jan 05 14:32:34 compute-0 systemd-rc-local-generator[162296]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:32:34 compute-0 systemd-sysv-generator[162300]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:32:34 compute-0 sudo[162229]: pam_unix(sudo:session): session closed for user root
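At this point edpm_libvirt.target has been installed, enabled and reached, edpm_libvirt_guests.service has been installed and enabled, and virt-guest-shutdown.target has been installed. Their state and dependency wiring can be inspected as follows (a sketch using the unit names installed above):

    systemctl status edpm_libvirt.target --no-pager
    systemctl list-dependencies edpm_libvirt.target --no-pager
    systemctl is-enabled edpm_libvirt.target edpm_libvirt_guests.service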
Jan 05 14:32:35 compute-0 sshd-session[107768]: Connection closed by 192.168.122.30 port 33152
Jan 05 14:32:35 compute-0 sshd-session[107765]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:32:35 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 05 14:32:35 compute-0 systemd[1]: session-23.scope: Consumed 3min 54.338s CPU time.
Jan 05 14:32:35 compute-0 systemd-logind[795]: Session 23 logged out. Waiting for processes to exit.
Jan 05 14:32:35 compute-0 systemd-logind[795]: Removed session 23.
Jan 05 14:32:41 compute-0 podman[162327]: 2026-01-05 14:32:41.654233341 +0000 UTC m=+0.135222329 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 05 14:32:41 compute-0 sshd-session[162354]: Accepted publickey for zuul from 192.168.122.30 port 51062 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:32:41 compute-0 systemd-logind[795]: New session 24 of user zuul.
Jan 05 14:32:41 compute-0 systemd[1]: Started Session 24 of User zuul.
Jan 05 14:32:41 compute-0 sshd-session[162354]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:32:42 compute-0 python3.9[162507]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:32:44 compute-0 python3.9[162661]: ansible-ansible.builtin.service_facts Invoked
Jan 05 14:32:44 compute-0 network[162678]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 14:32:44 compute-0 network[162679]: 'network-scripts' will be removed from distribution in near future.
Jan 05 14:32:44 compute-0 network[162680]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 14:32:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:32:44.780 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:32:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:32:44.781 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:32:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:32:44.781 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:32:47 compute-0 podman[162775]: 2026-01-05 14:32:47.99001064 +0000 UTC m=+0.088238575 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 14:32:49 compute-0 sudo[162969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcgqnrpfsijifxhiyfaakeuvvfqxozbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623568.9477162-47-178734503416394/AnsiballZ_setup.py'
Jan 05 14:32:49 compute-0 sudo[162969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:49 compute-0 python3.9[162971]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:32:50 compute-0 sudo[162969]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:50 compute-0 sudo[163053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skljhnybcvkejzvumrizctjssixwptkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623568.9477162-47-178734503416394/AnsiballZ_dnf.py'
Jan 05 14:32:50 compute-0 sudo[163053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:50 compute-0 python3.9[163055]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:32:55 compute-0 sudo[163053]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:56 compute-0 sudo[163206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdtbopaugmybkhoobrislotyufqoixdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623575.9964175-59-271897212434383/AnsiballZ_stat.py'
Jan 05 14:32:56 compute-0 sudo[163206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:56 compute-0 python3.9[163208]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:32:56 compute-0 sudo[163206]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:57 compute-0 sudo[163358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnnjlfhldduvcttludyavimpxvhvivhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623577.0951052-69-214412203104642/AnsiballZ_command.py'
Jan 05 14:32:57 compute-0 sudo[163358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:57 compute-0 python3.9[163360]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:32:57 compute-0 sudo[163358]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:58 compute-0 sudo[163511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pofhogqphvvxgxujwcmvdfxthhyqzpep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623578.196324-79-257513863999068/AnsiballZ_stat.py'
Jan 05 14:32:58 compute-0 sudo[163511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:58 compute-0 python3.9[163513]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:32:58 compute-0 sudo[163511]: pam_unix(sudo:session): session closed for user root
Jan 05 14:32:59 compute-0 sudo[163663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wedvzstpjtbczqpbkpovdekprxbjrwgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623579.0100555-87-62698283666157/AnsiballZ_command.py'
Jan 05 14:32:59 compute-0 sudo[163663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:32:59 compute-0 python3.9[163665]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:32:59 compute-0 sudo[163663]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:00 compute-0 sudo[163816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uybqwsugmgjhnbgdnpccvbanlemlsndy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623579.714337-95-209245618727791/AnsiballZ_stat.py'
Jan 05 14:33:00 compute-0 sudo[163816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:00 compute-0 python3.9[163818]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:33:00 compute-0 sudo[163816]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:00 compute-0 sudo[163939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psaqtooxiscnrabqhthgzoohdoxvdjot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623579.714337-95-209245618727791/AnsiballZ_copy.py'
Jan 05 14:33:00 compute-0 sudo[163939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:01 compute-0 python3.9[163941]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623579.714337-95-209245618727791/.source.iscsi _original_basename=.rdl2hmds follow=False checksum=3be87a0582fbf480e518eaeeb79c7e21fc4a0e14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:01 compute-0 sudo[163939]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:01 compute-0 sudo[164091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzqfgmutcnusjpiljjnmkkwlrrnkgixs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623581.3973792-110-45663560059502/AnsiballZ_file.py'
Jan 05 14:33:01 compute-0 sudo[164091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:02 compute-0 python3.9[164093]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:02 compute-0 sudo[164091]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:02 compute-0 sudo[164243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuweuvghvwucsfpsrbrgdsnplxwpmccs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623582.3676674-118-109201991905896/AnsiballZ_lineinfile.py'
Jan 05 14:33:02 compute-0 sudo[164243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:03 compute-0 python3.9[164245]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:03 compute-0 sudo[164243]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:04 compute-0 sudo[164395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkkudpjvqmualybnalghjplzzrbobxqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623583.3434224-127-201690875333988/AnsiballZ_systemd_service.py'
Jan 05 14:33:04 compute-0 sudo[164395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:04 compute-0 python3.9[164397]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:33:04 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 05 14:33:04 compute-0 sudo[164395]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:05 compute-0 sudo[164551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lujadjpsqxnuyrfjxjhkxcqrzvadfbqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623584.8073013-135-65690012763920/AnsiballZ_systemd_service.py'
Jan 05 14:33:05 compute-0 sudo[164551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:05 compute-0 python3.9[164553]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:33:05 compute-0 systemd[1]: Reloading.
Jan 05 14:33:05 compute-0 systemd-rc-local-generator[164583]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:33:05 compute-0 systemd-sysv-generator[164586]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:33:05 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 05 14:33:05 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 05 14:33:06 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 05 14:33:06 compute-0 systemd[1]: Started Open-iSCSI.
Jan 05 14:33:06 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 05 14:33:06 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 05 14:33:06 compute-0 sudo[164551]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:07 compute-0 python3.9[164753]: ansible-ansible.builtin.service_facts Invoked
Jan 05 14:33:07 compute-0 network[164771]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 14:33:07 compute-0 network[164772]: 'network-scripts' will be removed from distribution in near future.
Jan 05 14:33:07 compute-0 network[164773]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 14:33:07 compute-0 sshd-session[164750]: Invalid user solv from 165.22.168.95 port 55298
Jan 05 14:33:07 compute-0 sshd-session[164750]: Connection closed by invalid user solv 165.22.168.95 port 55298 [preauth]
Jan 05 14:33:11 compute-0 podman[164878]: 2026-01-05 14:33:11.848132762 +0000 UTC m=+0.142515281 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 14:33:13 compute-0 sudo[165069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwwlfjhfqosjdlsfddkkerqdmafaxjnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623593.5129502-158-263900576747388/AnsiballZ_dnf.py'
Jan 05 14:33:13 compute-0 sudo[165069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:14 compute-0 python3.9[165071]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:33:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 14:33:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 14:33:16 compute-0 systemd[1]: Reloading.
Jan 05 14:33:16 compute-0 systemd-rc-local-generator[165108]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:33:16 compute-0 systemd-sysv-generator[165116]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:33:17 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 14:33:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 14:33:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 14:33:17 compute-0 systemd[1]: run-r3d7657b47a664af5a5538349e1cb2bfe.service: Deactivated successfully.
Jan 05 14:33:17 compute-0 sudo[165069]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:18 compute-0 sudo[165398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixaqkvarcdspiuxolzpymhjldlbetiym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623597.7825503-167-49034634461968/AnsiballZ_file.py'
Jan 05 14:33:18 compute-0 sudo[165398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:18 compute-0 podman[165361]: 2026-01-05 14:33:18.171606788 +0000 UTC m=+0.097167471 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 05 14:33:18 compute-0 python3.9[165406]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 05 14:33:18 compute-0 sudo[165398]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:19 compute-0 sudo[165558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkjodkscmkhdrogpnpvhenksdbqbxhcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623598.6083317-175-269260161829282/AnsiballZ_modprobe.py'
Jan 05 14:33:19 compute-0 sudo[165558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:19 compute-0 python3.9[165560]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 05 14:33:19 compute-0 sudo[165558]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:20 compute-0 sudo[165714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysphucgubyhyfgyzezpraifbfaihxebx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623599.6043434-183-261048325027283/AnsiballZ_stat.py'
Jan 05 14:33:20 compute-0 sudo[165714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:20 compute-0 python3.9[165716]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:33:20 compute-0 sudo[165714]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:20 compute-0 sudo[165837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suepbnkahtnzjsuxgasritrvadlkqgum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623599.6043434-183-261048325027283/AnsiballZ_copy.py'
Jan 05 14:33:20 compute-0 sudo[165837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:20 compute-0 python3.9[165839]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623599.6043434-183-261048325027283/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:20 compute-0 sudo[165837]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:21 compute-0 sudo[165989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sppjytoqugneyvemfnqlbozemcjdtifc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623601.0651128-199-12574747034939/AnsiballZ_lineinfile.py'
Jan 05 14:33:21 compute-0 sudo[165989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:21 compute-0 python3.9[165991]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:21 compute-0 sudo[165989]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:22 compute-0 sudo[166141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpnfngxrzkznqybmwpocxnyrrivpwkcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623601.9727044-207-56462271687976/AnsiballZ_systemd.py'
Jan 05 14:33:22 compute-0 sudo[166141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:22 compute-0 python3.9[166143]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:33:23 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 05 14:33:23 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 05 14:33:23 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 05 14:33:23 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 05 14:33:23 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 05 14:33:23 compute-0 sudo[166141]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:23 compute-0 sudo[166297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezcikiltgpjikbhkjcstvwkiljmfqght ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623603.3154557-215-68631202897509/AnsiballZ_command.py'
Jan 05 14:33:23 compute-0 sudo[166297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:23 compute-0 python3.9[166299]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:33:23 compute-0 sudo[166297]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:24 compute-0 sudo[166450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbafiqmyswngjvbnaeoyzytxnvokraxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623604.4798172-225-3825011685500/AnsiballZ_stat.py'
Jan 05 14:33:24 compute-0 sudo[166450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:25 compute-0 python3.9[166452]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:33:25 compute-0 sudo[166450]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:25 compute-0 sudo[166602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksszmstzssptdojhhlfwqkgggotwmxhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623605.3078773-234-272000709112513/AnsiballZ_stat.py'
Jan 05 14:33:25 compute-0 sudo[166602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:25 compute-0 python3.9[166604]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:33:25 compute-0 sudo[166602]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:26 compute-0 sudo[166725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctuqetksxbprdcwpwdaddawxvluftpyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623605.3078773-234-272000709112513/AnsiballZ_copy.py'
Jan 05 14:33:26 compute-0 sudo[166725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:26 compute-0 python3.9[166727]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623605.3078773-234-272000709112513/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:26 compute-0 sudo[166725]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:27 compute-0 sudo[166877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krwpujtackgxubhjzzghcwczjkkzhurm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623606.8103468-249-37144451655427/AnsiballZ_command.py'
Jan 05 14:33:27 compute-0 sudo[166877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:27 compute-0 python3.9[166879]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:33:27 compute-0 sudo[166877]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:28 compute-0 sudo[167030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igkbzjrrbpgiyfgunhjdvyxvzkzhavxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623607.791508-257-179257581023795/AnsiballZ_lineinfile.py'
Jan 05 14:33:28 compute-0 sudo[167030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:28 compute-0 python3.9[167032]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:28 compute-0 sudo[167030]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:29 compute-0 sudo[167182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjuarhqjmgkqieopvecykimyknttcwwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623608.5658402-265-184255701906720/AnsiballZ_replace.py'
Jan 05 14:33:29 compute-0 sudo[167182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:29 compute-0 python3.9[167184]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:29 compute-0 sudo[167182]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:29 compute-0 sudo[167334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egujkgfrwdqouvoeejkzyiyoehmvjyec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623609.5119102-273-210215518429968/AnsiballZ_replace.py'
Jan 05 14:33:29 compute-0 sudo[167334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:30 compute-0 python3.9[167336]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:30 compute-0 sudo[167334]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:30 compute-0 sudo[167486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mccasyncnmuoaxmymfjfcmtttgjrnlpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623610.275672-282-165059473078288/AnsiballZ_lineinfile.py'
Jan 05 14:33:30 compute-0 sudo[167486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:30 compute-0 python3.9[167488]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:30 compute-0 sudo[167486]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:31 compute-0 sudo[167638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rshrdaqrjotcbpruuewdkhmeljbhtgpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623611.1076212-282-262256524706575/AnsiballZ_lineinfile.py'
Jan 05 14:33:31 compute-0 sudo[167638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:31 compute-0 python3.9[167640]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:31 compute-0 sudo[167638]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:32 compute-0 sudo[167790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwzqbdptslkoexlfwogjoslzbbfwpwtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623611.8452172-282-219847206300891/AnsiballZ_lineinfile.py'
Jan 05 14:33:32 compute-0 sudo[167790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:32 compute-0 python3.9[167792]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:32 compute-0 sudo[167790]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:32 compute-0 sudo[167942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rchcdrclunvckpivqgytijkhynhwrrrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623612.529623-282-140008698234887/AnsiballZ_lineinfile.py'
Jan 05 14:33:32 compute-0 sudo[167942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:33 compute-0 python3.9[167944]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:33 compute-0 sudo[167942]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:33 compute-0 sudo[168094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfivvviplwaiyvoechvejuuyysgfstyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623613.31399-311-135349798536587/AnsiballZ_stat.py'
Jan 05 14:33:33 compute-0 sudo[168094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:33 compute-0 python3.9[168096]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:33:33 compute-0 sudo[168094]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:34 compute-0 sudo[168248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odbtjdewcvnzhvqezfnqthrmhyfpvkuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623614.1384838-319-139688414291708/AnsiballZ_command.py'
Jan 05 14:33:34 compute-0 sudo[168248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:34 compute-0 python3.9[168250]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:33:34 compute-0 sudo[168248]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:35 compute-0 sudo[168401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnabuxyathvvaubayroyjcsqulooevru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623614.977255-328-215736574141385/AnsiballZ_systemd_service.py'
Jan 05 14:33:35 compute-0 sudo[168401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:35 compute-0 python3.9[168403]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:33:35 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 05 14:33:36 compute-0 sudo[168401]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:36 compute-0 sudo[168557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blsubspgwbesfafaeipjtsyrcmmldokc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623616.2508848-336-153163745844244/AnsiballZ_systemd_service.py'
Jan 05 14:33:36 compute-0 sudo[168557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:36 compute-0 python3.9[168559]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:33:37 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 05 14:33:37 compute-0 udevadm[168564]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 05 14:33:37 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 05 14:33:37 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 05 14:33:37 compute-0 multipathd[168568]: --------start up--------
Jan 05 14:33:37 compute-0 multipathd[168568]: read /etc/multipath.conf
Jan 05 14:33:37 compute-0 multipathd[168568]: path checkers start up
Jan 05 14:33:37 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 05 14:33:37 compute-0 sudo[168557]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:38 compute-0 sudo[168725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyysagztuthbgzmmlbhwsbozqocjdzbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623617.6745937-348-263523875412979/AnsiballZ_file.py'
Jan 05 14:33:38 compute-0 sudo[168725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:38 compute-0 python3.9[168727]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 05 14:33:38 compute-0 sudo[168725]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:38 compute-0 sudo[168877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nejhoaptszfdkffmdoifbuileswsijyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623618.5166855-356-184933559387150/AnsiballZ_modprobe.py'
Jan 05 14:33:38 compute-0 sudo[168877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:39 compute-0 python3.9[168879]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 05 14:33:39 compute-0 kernel: Key type psk registered
Jan 05 14:33:39 compute-0 sudo[168877]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:39 compute-0 sudo[169038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brlbkuaelkwfcjucdghhimcnaingvjve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623619.4298916-364-194436617346588/AnsiballZ_stat.py'
Jan 05 14:33:39 compute-0 sudo[169038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:40 compute-0 python3.9[169040]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:33:40 compute-0 sudo[169038]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:40 compute-0 sudo[169161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mteoartdjflrpfrajxguhnowparvnfaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623619.4298916-364-194436617346588/AnsiballZ_copy.py'
Jan 05 14:33:40 compute-0 sudo[169161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:40 compute-0 python3.9[169163]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623619.4298916-364-194436617346588/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:40 compute-0 sudo[169161]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:41 compute-0 sudo[169313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eltcysnocivhtnlamdbuerrvazwnlpzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623621.1516216-380-30567383303429/AnsiballZ_lineinfile.py'
Jan 05 14:33:41 compute-0 sudo[169313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:41 compute-0 python3.9[169315]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:41 compute-0 sudo[169313]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:42 compute-0 sudo[169476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyryppbxjtreinirefhmgmxkezqpbrvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623622.067792-388-75613846400662/AnsiballZ_systemd.py'
Jan 05 14:33:42 compute-0 sudo[169476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:42 compute-0 podman[169439]: 2026-01-05 14:33:42.581623214 +0000 UTC m=+0.176562603 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 05 14:33:42 compute-0 python3.9[169484]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:33:42 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 05 14:33:42 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 05 14:33:42 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 05 14:33:42 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 05 14:33:42 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 05 14:33:42 compute-0 sudo[169476]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:43 compute-0 sudo[169647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnszwggparqpdvurpegoxfsuruugysfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623623.199237-396-207170268227682/AnsiballZ_dnf.py'
Jan 05 14:33:43 compute-0 sudo[169647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:43 compute-0 python3.9[169649]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:33:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:33:44.782 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:33:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:33:44.782 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:33:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:33:44.782 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:33:45 compute-0 systemd[1]: Reloading.
Jan 05 14:33:45 compute-0 systemd-rc-local-generator[169678]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:33:45 compute-0 systemd-sysv-generator[169684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:33:46 compute-0 systemd[1]: Reloading.
Jan 05 14:33:46 compute-0 systemd-rc-local-generator[169709]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:33:46 compute-0 systemd-sysv-generator[169714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:33:46 compute-0 virtproxyd[156571]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Jan 05 14:33:46 compute-0 virtnodedevd[156347]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Jan 05 14:33:46 compute-0 virtproxyd[156571]: hostname: compute-0
Jan 05 14:33:46 compute-0 virtproxyd[156571]: nl_recv returned with error: No buffer space available
Jan 05 14:33:46 compute-0 virtnodedevd[156347]: hostname: compute-0
Jan 05 14:33:46 compute-0 virtnodedevd[156347]: nl_recv returned with error: No buffer space available
Jan 05 14:33:46 compute-0 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 05 14:33:46 compute-0 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 05 14:33:46 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 14:33:46 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 14:33:46 compute-0 systemd[1]: Reloading.
Jan 05 14:33:46 compute-0 systemd-sysv-generator[169816]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:33:46 compute-0 systemd-rc-local-generator[169809]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:33:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 14:33:47 compute-0 sudo[169647]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:48 compute-0 sudo[171057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqqtgaxqsiywpbhgrzpqnzdbhrcjjhko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623627.7552059-404-182385425257247/AnsiballZ_systemd_service.py'
Jan 05 14:33:48 compute-0 sudo[171057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 14:33:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 14:33:48 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.640s CPU time.
Jan 05 14:33:48 compute-0 systemd[1]: run-rfe08791f8b6045efa36504dbd8c221e9.service: Deactivated successfully.
Jan 05 14:33:48 compute-0 podman[171114]: 2026-01-05 14:33:48.258069128 +0000 UTC m=+0.052182959 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 14:33:48 compute-0 python3.9[171077]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:33:48 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 05 14:33:48 compute-0 iscsid[164593]: iscsid shutting down.
Jan 05 14:33:48 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 05 14:33:48 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 05 14:33:48 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 05 14:33:48 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 05 14:33:48 compute-0 systemd[1]: Started Open-iSCSI.
Jan 05 14:33:48 compute-0 sudo[171057]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:49 compute-0 sudo[171286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avjkjddzagsgpnxlbuyggeqywxlubpgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623628.7151232-412-182204531359402/AnsiballZ_systemd_service.py'
Jan 05 14:33:49 compute-0 sudo[171286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:49 compute-0 python3.9[171288]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:33:49 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 05 14:33:49 compute-0 multipathd[168568]: exit (signal)
Jan 05 14:33:49 compute-0 multipathd[168568]: --------shut down-------
Jan 05 14:33:49 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 05 14:33:49 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 05 14:33:49 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 05 14:33:49 compute-0 multipathd[171294]: --------start up--------
Jan 05 14:33:49 compute-0 multipathd[171294]: read /etc/multipath.conf
Jan 05 14:33:49 compute-0 multipathd[171294]: path checkers start up
Jan 05 14:33:49 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 05 14:33:49 compute-0 sudo[171286]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:50 compute-0 python3.9[171451]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:33:51 compute-0 sudo[171605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oimwxxfwhoxewfxitihcxhldbmohjxhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623631.085132-430-154487156933528/AnsiballZ_file.py'
Jan 05 14:33:51 compute-0 sudo[171605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:51 compute-0 python3.9[171607]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:33:51 compute-0 sudo[171605]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:52 compute-0 sudo[171757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xapjpzsszkluxtvcmysjbrfpngjfwlup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623632.0553265-441-150512565731754/AnsiballZ_systemd_service.py'
Jan 05 14:33:52 compute-0 sudo[171757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:33:52 compute-0 python3.9[171759]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:33:52 compute-0 systemd[1]: Reloading.
Jan 05 14:33:52 compute-0 systemd-sysv-generator[171791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:33:52 compute-0 systemd-rc-local-generator[171788]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:33:53 compute-0 sudo[171757]: pam_unix(sudo:session): session closed for user root
Jan 05 14:33:53 compute-0 python3.9[171945]: ansible-ansible.builtin.service_facts Invoked
Jan 05 14:33:55 compute-0 network[171962]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 14:33:55 compute-0 network[171963]: 'network-scripts' will be removed from distribution in near future.
Jan 05 14:33:55 compute-0 network[171964]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 14:33:59 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 05 14:34:00 compute-0 sudo[172235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzidbappkxbdspzqbdtvpytksdfzghua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623640.414429-460-59972734172379/AnsiballZ_systemd_service.py'
Jan 05 14:34:00 compute-0 sudo[172235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:00 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 14:34:01 compute-0 python3.9[172237]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:34:01 compute-0 sudo[172235]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:01 compute-0 sudo[172389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kphzchzztamcyajsfnrjlzdxbgeufrlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623641.2994711-460-152306211128176/AnsiballZ_systemd_service.py'
Jan 05 14:34:01 compute-0 sudo[172389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:01 compute-0 python3.9[172391]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:34:02 compute-0 sudo[172389]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:02 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 05 14:34:02 compute-0 sudo[172543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wojlrlcmppqudghkpstynojprcxktltj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623642.190634-460-233965227586211/AnsiballZ_systemd_service.py'
Jan 05 14:34:02 compute-0 sudo[172543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:02 compute-0 python3.9[172545]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:34:02 compute-0 sudo[172543]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:03 compute-0 sudo[172696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcasygxmfxdgeaadnkdbrcfubpmhqsve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623643.079854-460-195953940869739/AnsiballZ_systemd_service.py'
Jan 05 14:34:03 compute-0 sudo[172696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:03 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 05 14:34:03 compute-0 python3.9[172698]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:34:03 compute-0 sudo[172696]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:04 compute-0 sudo[172850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtgtktzxocrymvkbbsiupaaybwgdvxqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623644.0410187-460-92174192440395/AnsiballZ_systemd_service.py'
Jan 05 14:34:04 compute-0 sudo[172850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:04 compute-0 python3.9[172852]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:34:04 compute-0 sudo[172850]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:05 compute-0 sudo[173003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntecvvizombsnbnmvnddjwkijtozodli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623644.8807762-460-281076607017236/AnsiballZ_systemd_service.py'
Jan 05 14:34:05 compute-0 sudo[173003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:05 compute-0 python3.9[173005]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:34:05 compute-0 sudo[173003]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:06 compute-0 sudo[173156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwmtzmaejjepbeexfloupxfbypujedeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623645.7353299-460-207457445427001/AnsiballZ_systemd_service.py'
Jan 05 14:34:06 compute-0 sudo[173156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:06 compute-0 python3.9[173158]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:34:06 compute-0 sudo[173156]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:06 compute-0 sudo[173309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bymlwllponwczxleplzwfifanfuiyogu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623646.5824757-460-207254414255540/AnsiballZ_systemd_service.py'
Jan 05 14:34:06 compute-0 sudo[173309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:07 compute-0 python3.9[173311]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:34:07 compute-0 sudo[173309]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:07 compute-0 sudo[173462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zalfeaamchejekncrvugyhxsooyjbfqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623647.6760335-519-154714319588955/AnsiballZ_file.py'
Jan 05 14:34:07 compute-0 sudo[173462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:08 compute-0 python3.9[173464]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:08 compute-0 sudo[173462]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:08 compute-0 sudo[173614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxxpnlbftkzuasnjcznxokzcjylwngnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623648.3171864-519-184345962750693/AnsiballZ_file.py'
Jan 05 14:34:08 compute-0 sudo[173614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:08 compute-0 python3.9[173616]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:08 compute-0 sudo[173614]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:09 compute-0 sudo[173766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wukrsncuhxwxhcgoijjlvsurerrfejsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623649.0806994-519-97365880496469/AnsiballZ_file.py'
Jan 05 14:34:09 compute-0 sudo[173766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:09 compute-0 python3.9[173768]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:09 compute-0 sudo[173766]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:10 compute-0 sudo[173918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quxvvmcjcjsksjiihucknovperbiplvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623649.8411198-519-69288171075220/AnsiballZ_file.py'
Jan 05 14:34:10 compute-0 sudo[173918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:10 compute-0 python3.9[173920]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:10 compute-0 sudo[173918]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:10 compute-0 sudo[174070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgkpucvdolqysvybqfwyjecwzwxribfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623650.5754464-519-9143645326418/AnsiballZ_file.py'
Jan 05 14:34:10 compute-0 sudo[174070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:11 compute-0 python3.9[174072]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:11 compute-0 sudo[174070]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:11 compute-0 sudo[174222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vozxfskydcgruqfwfekmjutlgeddcdsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623651.347412-519-230312055390005/AnsiballZ_file.py'
Jan 05 14:34:11 compute-0 sudo[174222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:11 compute-0 python3.9[174224]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:11 compute-0 sudo[174222]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:12 compute-0 sudo[174374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffmsvvsbiqkccotxjkynjngcmyjfnwml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623652.0747745-519-224694800403604/AnsiballZ_file.py'
Jan 05 14:34:12 compute-0 sudo[174374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:12 compute-0 python3.9[174376]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:12 compute-0 sudo[174374]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:13 compute-0 sudo[174536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvgzcigbhlpbuceyzikqqqmtrtjmjvmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623652.7517807-519-107745009492296/AnsiballZ_file.py'
Jan 05 14:34:13 compute-0 sudo[174536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:13 compute-0 podman[174500]: 2026-01-05 14:34:13.177761889 +0000 UTC m=+0.133733511 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:34:13 compute-0 python3.9[174545]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:13 compute-0 sudo[174536]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:13 compute-0 sudo[174704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sezcynlbozuvdvqhloetwwjnseojcmab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623653.5257998-576-39040959509001/AnsiballZ_file.py'
Jan 05 14:34:13 compute-0 sudo[174704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:14 compute-0 python3.9[174706]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:14 compute-0 sudo[174704]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:14 compute-0 sudo[174856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbzhxpramvcxjtzmxwidqsovhruphvpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623654.2018251-576-226875653405034/AnsiballZ_file.py'
Jan 05 14:34:14 compute-0 sudo[174856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:14 compute-0 python3.9[174858]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:14 compute-0 sudo[174856]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:15 compute-0 sudo[175008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceehgqdsvqtozvqpkebzkdapfdpymoap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623654.9679563-576-235660857538971/AnsiballZ_file.py'
Jan 05 14:34:15 compute-0 sudo[175008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:15 compute-0 python3.9[175010]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:15 compute-0 sudo[175008]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:16 compute-0 sudo[175160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukthkhbghzfqnobthjfjjrebfagohkwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623655.7528908-576-12464377388859/AnsiballZ_file.py'
Jan 05 14:34:16 compute-0 sudo[175160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:16 compute-0 python3.9[175162]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:16 compute-0 sudo[175160]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:16 compute-0 sudo[175312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojrjuwtwywfepmflrcbrgciecbbkptdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623656.4420576-576-69049228003121/AnsiballZ_file.py'
Jan 05 14:34:16 compute-0 sudo[175312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:16 compute-0 python3.9[175314]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:16 compute-0 sudo[175312]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:17 compute-0 sudo[175464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqttykvcyzqoiuobyekhdowgwefudrud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623657.1266165-576-49509136032135/AnsiballZ_file.py'
Jan 05 14:34:17 compute-0 sudo[175464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:17 compute-0 python3.9[175466]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:17 compute-0 sudo[175464]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:18 compute-0 sudo[175629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afqitrymdxnlbjaszrkstmzfxpaagpyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623658.0602276-576-135695404185287/AnsiballZ_file.py'
Jan 05 14:34:18 compute-0 sudo[175629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:18 compute-0 podman[175590]: 2026-01-05 14:34:18.408456904 +0000 UTC m=+0.073560754 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 14:34:18 compute-0 python3.9[175637]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:18 compute-0 sudo[175629]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:19 compute-0 sudo[175787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czlmyzahglwtjdlwxumwbnhxixzaymqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623658.7545733-576-253610848536103/AnsiballZ_file.py'
Jan 05 14:34:19 compute-0 sudo[175787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:19 compute-0 python3.9[175789]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:19 compute-0 sudo[175787]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:20 compute-0 sudo[175939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjokpunnuwernfomytqjvyyyvqhmczn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623659.7021105-634-33088023366339/AnsiballZ_command.py'
Jan 05 14:34:20 compute-0 sudo[175939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:20 compute-0 python3.9[175941]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:34:20 compute-0 sudo[175939]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:21 compute-0 python3.9[176093]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 14:34:22 compute-0 sudo[176243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjrtfcepdgmynythgxdqgvyenlkeokpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623661.6708088-652-158922650313024/AnsiballZ_systemd_service.py'
Jan 05 14:34:22 compute-0 sudo[176243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:22 compute-0 python3.9[176245]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:34:22 compute-0 systemd[1]: Reloading.
Jan 05 14:34:22 compute-0 systemd-rc-local-generator[176272]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:34:22 compute-0 systemd-sysv-generator[176275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:34:22 compute-0 sudo[176243]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:23 compute-0 sudo[176429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adbswvlryegvufzkfphcmmfmedfdoecx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623662.7885678-660-255416108200964/AnsiballZ_command.py'
Jan 05 14:34:23 compute-0 sudo[176429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:23 compute-0 python3.9[176431]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:34:23 compute-0 sudo[176429]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:24 compute-0 sudo[176582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsiksnzucllpxriieqwesgokpjtnvjbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623663.5146847-660-51699895560116/AnsiballZ_command.py'
Jan 05 14:34:24 compute-0 sudo[176582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:24 compute-0 python3.9[176584]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:34:24 compute-0 sudo[176582]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:25 compute-0 sudo[176735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkdfzqugbxoykwnrphclsguzdrdwhjzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623665.0173872-660-102438347015726/AnsiballZ_command.py'
Jan 05 14:34:25 compute-0 sudo[176735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:25 compute-0 python3.9[176737]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:34:25 compute-0 sudo[176735]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:26 compute-0 sudo[176888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceqzkdducrpvdlerqmwtlkemapntxkea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623665.8971725-660-9447019199106/AnsiballZ_command.py'
Jan 05 14:34:26 compute-0 sudo[176888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:26 compute-0 python3.9[176890]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:34:26 compute-0 sudo[176888]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:27 compute-0 sudo[177041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqvtrswwkvsxgvmegziqdowhavckpcuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623666.66498-660-169166385483647/AnsiballZ_command.py'
Jan 05 14:34:27 compute-0 sudo[177041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:27 compute-0 python3.9[177043]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:34:27 compute-0 sudo[177041]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:27 compute-0 sudo[177194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laltuknanyjxirtncizykoxqsxdbfdux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623667.4944227-660-204444031329998/AnsiballZ_command.py'
Jan 05 14:34:27 compute-0 sudo[177194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:28 compute-0 python3.9[177196]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:34:28 compute-0 sudo[177194]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:28 compute-0 sudo[177347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdajcccbkprlahhcwpmghpnopsruuhgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623668.2823217-660-152046403629144/AnsiballZ_command.py'
Jan 05 14:34:28 compute-0 sudo[177347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:28 compute-0 python3.9[177349]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:34:28 compute-0 sudo[177347]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:29 compute-0 sudo[177500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqsphpmzcpburhexvqlccctirruprjrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623669.076339-660-184851750631907/AnsiballZ_command.py'
Jan 05 14:34:29 compute-0 sudo[177500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:29 compute-0 python3.9[177502]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:34:29 compute-0 sudo[177500]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:31 compute-0 sudo[177653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhollagihbyuxdbvhmmtisjykrfncsyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623670.7499611-739-235806184418321/AnsiballZ_file.py'
Jan 05 14:34:31 compute-0 sudo[177653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:31 compute-0 python3.9[177655]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:31 compute-0 sudo[177653]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:31 compute-0 sudo[177805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icipuxwlicuomqptvtczhnhalzxjlkjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623671.5270867-739-151317356996688/AnsiballZ_file.py'
Jan 05 14:34:31 compute-0 sudo[177805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:32 compute-0 python3.9[177807]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:32 compute-0 sudo[177805]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:32 compute-0 sudo[177957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puwcmivuvgjtnnaqkmyrridoatmftaxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623672.2831357-739-211447388950590/AnsiballZ_file.py'
Jan 05 14:34:32 compute-0 sudo[177957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:32 compute-0 python3.9[177959]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:32 compute-0 sudo[177957]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:33 compute-0 sudo[178109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhpjmihffwqcxfpdoeqtmuchwoqrstjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623673.1525688-761-220505089245148/AnsiballZ_file.py'
Jan 05 14:34:33 compute-0 sudo[178109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:33 compute-0 python3.9[178111]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:33 compute-0 sudo[178109]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:34 compute-0 sudo[178261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opotwfkrhggqfipntpqoijlmwrbqtnxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623673.9754362-761-101810849084748/AnsiballZ_file.py'
Jan 05 14:34:34 compute-0 sudo[178261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:34 compute-0 python3.9[178263]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:34 compute-0 sudo[178261]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:34 compute-0 sudo[178413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suukxcbwhssonfwzauizjuhlrhwvgewh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623674.657291-761-239450856244426/AnsiballZ_file.py'
Jan 05 14:34:34 compute-0 sudo[178413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:35 compute-0 python3.9[178415]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:35 compute-0 sudo[178413]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:35 compute-0 sudo[178565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoykynhkkloaxrhhnzzdtxxhnvrbgzfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623675.4573936-761-140742499429828/AnsiballZ_file.py'
Jan 05 14:34:35 compute-0 sudo[178565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:36 compute-0 python3.9[178567]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:36 compute-0 sudo[178565]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:36 compute-0 sudo[178717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vscddfehusygwooupewsdfivzlpypasx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623676.3149374-761-124655870212086/AnsiballZ_file.py'
Jan 05 14:34:36 compute-0 sudo[178717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:36 compute-0 python3.9[178719]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:36 compute-0 sudo[178717]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:37 compute-0 sudo[178869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxhxdbylsaiqmduzcpcvmbtfokldjyka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623677.1246536-761-71092945655299/AnsiballZ_file.py'
Jan 05 14:34:37 compute-0 sudo[178869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:37 compute-0 python3.9[178871]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:37 compute-0 sudo[178869]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:38 compute-0 sudo[179021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkxvmtqfbirbqyfwvrdmhgrbsvujbogp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623677.8853693-761-72760306312590/AnsiballZ_file.py'
Jan 05 14:34:38 compute-0 sudo[179021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:38 compute-0 python3.9[179023]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:38 compute-0 sudo[179021]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:43 compute-0 sudo[179188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnyfesltsxbaictdbwcpzfdwbsunsevu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623682.7949076-930-2677214101172/AnsiballZ_getent.py'
Jan 05 14:34:43 compute-0 sudo[179188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:43 compute-0 podman[179147]: 2026-01-05 14:34:43.408475771 +0000 UTC m=+0.123907589 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 14:34:43 compute-0 python3.9[179196]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 05 14:34:43 compute-0 sudo[179188]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:44 compute-0 sudo[179352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avyhwltermaywqlegkjcvjuyyrszkgwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623683.8201063-938-82453386735490/AnsiballZ_group.py'
Jan 05 14:34:44 compute-0 sudo[179352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:44 compute-0 python3.9[179354]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 14:34:44 compute-0 groupadd[179355]: group added to /etc/group: name=nova, GID=42436
Jan 05 14:34:44 compute-0 groupadd[179355]: group added to /etc/gshadow: name=nova
Jan 05 14:34:44 compute-0 groupadd[179355]: new group: name=nova, GID=42436
Jan 05 14:34:44 compute-0 sudo[179352]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:34:44.783 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:34:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:34:44.784 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:34:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:34:44.784 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:34:45 compute-0 sudo[179510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzelypxwrkfclffjlqknbxekqfyofdbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623684.855261-946-117201115409120/AnsiballZ_user.py'
Jan 05 14:34:45 compute-0 sudo[179510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:45 compute-0 python3.9[179512]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 05 14:34:45 compute-0 useradd[179514]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 05 14:34:45 compute-0 useradd[179514]: add 'nova' to group 'libvirt'
Jan 05 14:34:45 compute-0 useradd[179514]: add 'nova' to shadow group 'libvirt'
Jan 05 14:34:45 compute-0 sudo[179510]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:46 compute-0 sshd-session[179545]: Accepted publickey for zuul from 192.168.122.30 port 35822 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:34:46 compute-0 systemd-logind[795]: New session 25 of user zuul.
Jan 05 14:34:46 compute-0 systemd[1]: Started Session 25 of User zuul.
Jan 05 14:34:46 compute-0 sshd-session[179545]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:34:47 compute-0 sshd-session[179548]: Received disconnect from 192.168.122.30 port 35822:11: disconnected by user
Jan 05 14:34:47 compute-0 sshd-session[179548]: Disconnected from user zuul 192.168.122.30 port 35822
Jan 05 14:34:47 compute-0 sshd-session[179545]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:34:47 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 05 14:34:47 compute-0 systemd-logind[795]: Session 25 logged out. Waiting for processes to exit.
Jan 05 14:34:47 compute-0 systemd-logind[795]: Removed session 25.
Jan 05 14:34:47 compute-0 python3.9[179698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:34:48 compute-0 python3.9[179819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623687.3065054-971-104182202082577/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:48 compute-0 podman[179820]: 2026-01-05 14:34:48.569074921 +0000 UTC m=+0.085286097 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 05 14:34:49 compute-0 python3.9[179989]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:34:49 compute-0 python3.9[180065]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:50 compute-0 python3.9[180215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:34:51 compute-0 python3.9[180336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623689.8999045-971-146161498479888/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:51 compute-0 python3.9[180486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:34:52 compute-0 python3.9[180607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623691.4128659-971-73164972938138/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:53 compute-0 python3.9[180757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:34:53 compute-0 python3.9[180878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623692.668207-971-116644189602665/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:54 compute-0 python3.9[181028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:34:55 compute-0 python3.9[181149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623694.0644927-971-163434996501462/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:34:55 compute-0 sudo[181299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhveixqolkmwhxdckwkodqjdtphenzkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623695.4893594-1054-185766007136379/AnsiballZ_file.py'
Jan 05 14:34:55 compute-0 sudo[181299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:56 compute-0 python3.9[181301]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:56 compute-0 sudo[181299]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:56 compute-0 sudo[181451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlzndqbyjprgmpgnoqlwarmaiizsweph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623696.4170897-1062-261992549027842/AnsiballZ_copy.py'
Jan 05 14:34:56 compute-0 sudo[181451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:57 compute-0 python3.9[181453]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:34:57 compute-0 sudo[181451]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:57 compute-0 sudo[181603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbvddugiuqvospqvtzhrnepxrnkmtcts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623697.325246-1070-15844397562312/AnsiballZ_stat.py'
Jan 05 14:34:57 compute-0 sudo[181603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:57 compute-0 python3.9[181605]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:34:57 compute-0 sudo[181603]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:58 compute-0 sudo[181755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqpqetpctopspejrfbhqvbngecflqxkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623698.128602-1078-103155756843174/AnsiballZ_stat.py'
Jan 05 14:34:58 compute-0 sudo[181755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:58 compute-0 python3.9[181757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:34:58 compute-0 sudo[181755]: pam_unix(sudo:session): session closed for user root
Jan 05 14:34:59 compute-0 sudo[181878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iklyorloezqxhxkcqmmwigkxkedhfuwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623698.128602-1078-103155756843174/AnsiballZ_copy.py'
Jan 05 14:34:59 compute-0 sudo[181878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:34:59 compute-0 python3.9[181880]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1767623698.128602-1078-103155756843174/.source _original_basename=.qvt8vqo_ follow=False checksum=31007ca011abcf7230e48a9131672432e0591cb6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 05 14:34:59 compute-0 sudo[181878]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:00 compute-0 python3.9[182032]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:35:01 compute-0 python3.9[182184]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:35:01 compute-0 python3.9[182305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623700.5377133-1104-153661415742889/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:35:02 compute-0 python3.9[182455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:35:03 compute-0 python3.9[182576]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623702.0207386-1119-112465715535756/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:35:04 compute-0 sudo[182726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeiswyeejjcwithtamztdqyusczwmdjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623703.5972817-1136-223617344334316/AnsiballZ_container_config_data.py'
Jan 05 14:35:04 compute-0 sudo[182726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:04 compute-0 python3.9[182728]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 05 14:35:04 compute-0 sudo[182726]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:05 compute-0 sudo[182878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjtjueramaxbgaxrkvnablaiobsuqpkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623704.73331-1147-9640057010497/AnsiballZ_container_config_hash.py'
Jan 05 14:35:05 compute-0 sudo[182878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:05 compute-0 python3.9[182880]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 14:35:05 compute-0 sudo[182878]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:06 compute-0 sudo[183030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swgcfnokdobrwgpxyjoowkuyghdytzeb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623706.0246503-1157-181057799392250/AnsiballZ_edpm_container_manage.py'
Jan 05 14:35:06 compute-0 sudo[183030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:06 compute-0 python3[183032]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 14:35:07 compute-0 podman[183067]: 2026-01-05 14:35:07.218749322 +0000 UTC m=+0.082192826 container create 6614cf1261854ec6e93a1d9392cf5f4e7bee06416a44a6aafd99b1957b4e99da (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.license=GPLv2)
Jan 05 14:35:07 compute-0 podman[183067]: 2026-01-05 14:35:07.180104258 +0000 UTC m=+0.043547812 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 05 14:35:07 compute-0 python3[183032]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 05 14:35:07 compute-0 sudo[183030]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:08 compute-0 sudo[183255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwflvbyudsuawfoiwozkjbxqthwdoewz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623707.6680434-1165-143437893063667/AnsiballZ_stat.py'
Jan 05 14:35:08 compute-0 sudo[183255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:08 compute-0 python3.9[183257]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:35:08 compute-0 sudo[183255]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:09 compute-0 sudo[183409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsdbyexzkyfwqyrivzmsvmbtcxeiijdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623708.852596-1177-65750126324052/AnsiballZ_container_config_data.py'
Jan 05 14:35:09 compute-0 sudo[183409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:09 compute-0 python3.9[183411]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 05 14:35:09 compute-0 sudo[183409]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:10 compute-0 sudo[183561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btntgcypgpgymlrcghspljqbxrdstmmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623709.825956-1188-98860754896006/AnsiballZ_container_config_hash.py'
Jan 05 14:35:10 compute-0 sudo[183561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:10 compute-0 python3.9[183563]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 14:35:10 compute-0 sudo[183561]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:11 compute-0 sudo[183713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isaxyjvizmotvzlkzbqfszndmtuovyzs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623710.9086401-1198-71194686030496/AnsiballZ_edpm_container_manage.py'
Jan 05 14:35:11 compute-0 sudo[183713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:11 compute-0 python3[183715]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 14:35:11 compute-0 podman[183752]: 2026-01-05 14:35:11.809053797 +0000 UTC m=+0.074304592 container create a5436cd4a4f091f12bd991a7df67531382e2daa92a3894a8c720f5d947bd25f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 05 14:35:11 compute-0 podman[183752]: 2026-01-05 14:35:11.770165226 +0000 UTC m=+0.035416101 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 05 14:35:11 compute-0 python3[183715]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 05 14:35:12 compute-0 sudo[183713]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:12 compute-0 sudo[183940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxxoksiygtaudwzlbcflyklxcbydcguf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623712.2400272-1206-279148651525285/AnsiballZ_stat.py'
Jan 05 14:35:12 compute-0 sudo[183940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:12 compute-0 python3.9[183942]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:35:12 compute-0 sudo[183940]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:13 compute-0 sudo[184106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmiynbciygncbssvihcoiehrgcmetfpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623713.2117805-1215-216804973404026/AnsiballZ_file.py'
Jan 05 14:35:13 compute-0 sudo[184106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:13 compute-0 podman[184068]: 2026-01-05 14:35:13.639853796 +0000 UTC m=+0.129008173 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 14:35:13 compute-0 python3.9[184112]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:35:13 compute-0 sudo[184106]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:14 compute-0 sudo[184270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjqwbkhewrvwypsjqysbnflndqmjdnux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623713.9026783-1215-187334253307610/AnsiballZ_copy.py'
Jan 05 14:35:14 compute-0 sudo[184270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:14 compute-0 python3.9[184272]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767623713.9026783-1215-187334253307610/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:35:14 compute-0 sudo[184270]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:15 compute-0 sudo[184346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozlmnseikbiktirsovqbmqymodqqrjzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623713.9026783-1215-187334253307610/AnsiballZ_systemd.py'
Jan 05 14:35:15 compute-0 sudo[184346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:15 compute-0 python3.9[184348]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:35:15 compute-0 systemd[1]: Reloading.
Jan 05 14:35:15 compute-0 systemd-sysv-generator[184379]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:35:15 compute-0 systemd-rc-local-generator[184374]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:35:15 compute-0 sudo[184346]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:16 compute-0 sudo[184457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxvciqwszroecldgvgqxihypjgzvdpnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623713.9026783-1215-187334253307610/AnsiballZ_systemd.py'
Jan 05 14:35:16 compute-0 sudo[184457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:16 compute-0 python3.9[184459]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:35:16 compute-0 systemd[1]: Reloading.
Jan 05 14:35:16 compute-0 systemd-rc-local-generator[184486]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:35:16 compute-0 systemd-sysv-generator[184492]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:35:16 compute-0 systemd[1]: Starting nova_compute container...
Jan 05 14:35:16 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:16 compute-0 podman[184499]: 2026-01-05 14:35:16.987884099 +0000 UTC m=+0.145367658 container init a5436cd4a4f091f12bd991a7df67531382e2daa92a3894a8c720f5d947bd25f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 14:35:17 compute-0 podman[184499]: 2026-01-05 14:35:16.999932692 +0000 UTC m=+0.157416211 container start a5436cd4a4f091f12bd991a7df67531382e2daa92a3894a8c720f5d947bd25f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.build-date=20251202)
Jan 05 14:35:17 compute-0 podman[184499]: nova_compute
Jan 05 14:35:17 compute-0 nova_compute[184514]: + sudo -E kolla_set_configs
Jan 05 14:35:17 compute-0 systemd[1]: Started nova_compute container.
Jan 05 14:35:17 compute-0 sudo[184457]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Validating config file
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Copying service configuration files
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Deleting /etc/ceph
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Creating directory /etc/ceph
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /etc/ceph
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Writing out command to execute
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 05 14:35:17 compute-0 nova_compute[184514]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 05 14:35:17 compute-0 nova_compute[184514]: ++ cat /run_command
Jan 05 14:35:17 compute-0 nova_compute[184514]: + CMD=nova-compute
Jan 05 14:35:17 compute-0 nova_compute[184514]: + ARGS=
Jan 05 14:35:17 compute-0 nova_compute[184514]: + sudo kolla_copy_cacerts
Jan 05 14:35:17 compute-0 nova_compute[184514]: + [[ ! -n '' ]]
Jan 05 14:35:17 compute-0 nova_compute[184514]: + . kolla_extend_start
Jan 05 14:35:17 compute-0 nova_compute[184514]: Running command: 'nova-compute'
Jan 05 14:35:17 compute-0 nova_compute[184514]: + echo 'Running command: '\''nova-compute'\'''
Jan 05 14:35:17 compute-0 nova_compute[184514]: + umask 0022
Jan 05 14:35:17 compute-0 nova_compute[184514]: + exec nova-compute
Jan 05 14:35:18 compute-0 python3.9[184676]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:35:18 compute-0 podman[184800]: 2026-01-05 14:35:18.873692047 +0000 UTC m=+0.090863351 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 05 14:35:18 compute-0 nova_compute[184514]: 2026-01-05 14:35:18.885 184518 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 14:35:18 compute-0 nova_compute[184514]: 2026-01-05 14:35:18.885 184518 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 14:35:18 compute-0 nova_compute[184514]: 2026-01-05 14:35:18.885 184518 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 14:35:18 compute-0 nova_compute[184514]: 2026-01-05 14:35:18.886 184518 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 05 14:35:19 compute-0 nova_compute[184514]: 2026-01-05 14:35:19.004 184518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:35:19 compute-0 nova_compute[184514]: 2026-01-05 14:35:19.032 184518 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:35:19 compute-0 nova_compute[184514]: 2026-01-05 14:35:19.032 184518 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 05 14:35:19 compute-0 python3.9[184838]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:35:20 compute-0 python3.9[184997]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.247 184518 INFO nova.virt.driver [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.346 184518 INFO nova.compute.provider_config [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.369 184518 DEBUG oslo_concurrency.lockutils [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.369 184518 DEBUG oslo_concurrency.lockutils [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.370 184518 DEBUG oslo_concurrency.lockutils [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.370 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.370 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.370 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.370 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.370 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.370 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.371 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.371 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.371 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.371 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.371 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.371 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.371 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.372 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.372 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.372 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.372 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.372 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.372 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.373 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.373 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.373 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.373 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.373 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.373 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.373 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.374 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.374 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.374 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.374 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.374 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.374 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.375 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.375 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.375 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.375 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.375 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.375 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.375 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.376 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.376 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.376 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.376 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.376 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.377 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.377 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.377 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.377 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.377 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.378 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.378 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.378 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.378 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.378 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.379 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.379 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.379 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.379 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.379 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.379 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.380 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.380 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.380 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.380 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.380 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.380 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.380 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.381 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.381 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.381 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.381 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.381 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.381 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.381 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.382 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.382 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.382 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.382 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.382 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.382 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.382 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.383 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.383 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.383 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.383 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.383 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.383 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.383 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.384 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.384 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.384 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.384 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.384 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.384 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.384 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.385 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.385 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.385 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.385 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.385 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.385 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.385 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.385 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.386 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.386 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.386 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.386 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.386 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.386 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.386 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.387 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.387 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.387 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.387 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.387 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.387 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.387 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.387 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.388 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.388 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.388 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.388 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.388 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.388 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.388 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.389 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.389 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.389 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.389 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.389 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.389 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.389 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.390 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.390 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.390 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.390 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.390 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.390 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.390 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.390 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.391 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.391 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.391 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.391 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.391 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.391 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.391 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.392 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.392 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.392 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.392 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.392 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.392 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.392 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.393 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.393 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.393 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.393 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.393 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.393 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.394 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.394 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.394 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.394 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.394 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.394 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.394 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.395 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.395 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.395 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.395 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.395 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.395 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.395 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.396 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.396 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.396 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.396 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.396 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.396 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.396 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.397 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.397 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.397 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.397 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.397 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.397 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.397 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.398 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.398 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.398 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.398 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.398 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.398 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.398 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.399 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.399 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.399 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.399 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.399 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.399 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.399 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.399 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.400 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.400 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.400 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.400 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.400 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.401 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.401 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.401 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.401 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.401 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.401 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.401 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.402 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.402 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.402 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.402 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.402 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.402 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.402 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.402 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.403 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.403 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.403 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.403 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.403 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.403 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.403 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.404 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.404 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.404 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.404 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.404 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.404 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.405 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.405 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.405 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.405 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.405 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.406 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.406 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.406 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.406 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.406 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.406 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.406 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.407 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.407 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.407 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.407 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.407 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.407 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.407 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.408 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.408 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.408 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.408 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.408 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.408 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.408 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.409 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.409 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.409 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.409 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.409 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.409 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.409 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.410 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.410 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.410 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.410 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.410 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.410 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.410 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.411 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.411 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.411 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.411 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.411 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.411 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.411 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.411 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.412 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.412 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.412 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.412 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.412 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.412 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.412 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.413 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.413 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.413 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.413 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.413 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.413 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.413 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.414 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.414 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.414 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.414 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.414 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.414 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.414 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.415 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.415 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.415 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.415 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.415 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.415 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.415 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.416 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.416 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.416 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.416 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.416 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.416 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.416 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.417 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.417 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.417 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.417 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.417 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.417 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.417 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.418 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.418 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.418 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.418 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.418 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.418 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.418 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.419 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.419 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.419 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.419 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.419 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.419 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.419 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.420 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.420 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.420 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.420 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.420 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.420 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.420 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.421 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.421 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.421 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.421 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.421 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.421 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.421 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.422 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.422 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.422 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.422 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.422 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.422 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.423 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.423 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.423 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.423 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.423 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.423 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.423 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.424 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.424 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.424 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.424 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.424 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.424 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.425 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.425 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.425 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.425 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.425 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.425 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.426 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.426 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.426 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.426 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.426 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.427 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.427 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.427 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.427 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.427 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.428 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.428 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.428 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.428 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.428 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.428 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.429 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.429 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.429 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.429 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.429 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.429 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.430 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.430 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.430 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.430 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.430 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.430 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.430 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.431 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.431 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.431 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.431 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.431 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.431 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.431 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.431 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.432 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.432 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.432 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.432 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.432 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.432 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.432 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.433 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.433 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.433 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.433 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.433 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.433 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.433 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.434 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.434 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.434 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.434 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.434 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.434 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.435 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.435 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.435 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.435 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.435 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.435 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.436 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.436 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.436 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.436 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.436 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.437 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.437 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.437 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.437 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.438 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.438 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.438 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.438 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.438 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.439 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.439 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.439 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.439 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.439 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.440 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.440 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.440 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.440 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.440 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.441 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.441 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.441 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.441 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.441 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.442 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.442 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.442 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.442 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.442 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.443 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.443 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.443 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.443 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.443 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.444 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.444 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.444 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.444 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.444 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.445 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.445 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.445 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.445 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.445 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.446 184518 WARNING oslo_config.cfg [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 05 14:35:20 compute-0 nova_compute[184514]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 05 14:35:20 compute-0 nova_compute[184514]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 05 14:35:20 compute-0 nova_compute[184514]: and ``live_migration_inbound_addr`` respectively.
Jan 05 14:35:20 compute-0 nova_compute[184514]: ).  Its value may be silently ignored in the future.
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.446 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.446 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.446 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.447 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.447 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.447 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.447 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.448 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.448 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.448 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.448 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.448 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.449 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.449 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.449 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.449 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.449 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.450 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.450 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.450 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.450 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.450 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.451 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.451 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.451 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.451 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.451 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.452 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.452 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.452 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.452 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.453 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.453 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.453 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.453 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.453 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.453 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.454 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.454 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.454 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.454 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.454 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.454 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.454 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.454 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.455 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.455 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.455 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.455 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.455 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.455 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.455 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.456 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.456 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.456 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.456 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.456 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.456 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.456 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.457 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.457 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.457 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.457 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.457 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.457 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.457 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.458 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.458 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.458 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.458 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.458 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.458 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.458 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.458 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.459 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.459 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.459 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.459 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.459 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.459 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.459 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.460 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.460 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.460 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.460 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.460 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.460 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.460 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.461 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.461 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.461 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.461 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.461 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.461 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.461 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.461 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.462 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.462 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.462 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.462 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.462 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.462 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.462 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.463 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.463 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.463 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.463 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.463 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.463 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.463 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.463 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.464 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.464 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.464 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.464 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.464 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.464 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.464 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.465 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.465 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.465 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.465 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.465 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.465 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.465 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.465 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.466 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.466 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.466 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.466 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.466 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.466 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.466 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.467 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.467 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.467 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.467 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.467 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.467 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.467 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.468 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.468 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.468 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.468 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.468 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.468 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.468 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.469 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.469 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.469 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.469 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.469 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.469 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.469 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.470 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.470 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.470 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.470 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.470 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.470 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.470 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.471 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.471 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.471 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.471 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.471 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.471 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.471 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.471 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.472 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.472 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.472 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.472 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.472 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.472 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.472 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.473 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.473 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.473 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.473 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.473 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.473 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.473 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.474 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.474 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.474 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.474 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.474 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.474 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.474 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.475 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.475 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.475 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.475 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.475 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.475 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.475 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.476 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.476 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.476 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.476 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.476 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.476 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.476 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.476 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.477 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.477 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.477 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.477 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.477 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.477 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.477 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.478 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.478 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.478 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.478 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.478 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.478 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.478 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.478 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.479 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.479 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.479 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.479 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.479 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.479 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.479 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.479 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.480 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.480 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.480 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.480 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.480 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.480 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.480 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.481 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.481 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.481 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.481 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.481 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.481 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.481 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.481 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.482 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.482 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.482 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.482 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.482 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.482 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.483 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.483 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.483 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.483 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.483 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.483 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.483 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.484 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.484 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.484 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.484 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.484 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.484 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.484 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.485 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.485 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.485 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.485 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.485 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.485 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.485 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.485 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.486 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.486 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.486 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.486 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.486 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.486 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.486 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.487 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.487 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.487 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.487 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.487 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.487 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.487 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.487 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.488 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.488 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.488 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.488 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.488 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.488 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.488 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.489 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.489 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.489 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.489 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.489 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.489 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.489 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.490 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.490 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.490 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.490 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.490 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.490 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.490 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.490 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.491 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.491 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.491 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.491 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.491 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.491 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.491 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.492 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.492 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.492 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.492 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.492 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.492 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.492 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.493 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.493 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.493 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.493 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.493 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.493 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.493 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.493 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.494 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.494 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.494 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.494 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.494 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.494 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.494 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.495 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.495 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.495 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.495 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.495 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.495 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.495 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.496 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.496 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.496 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.496 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.496 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.496 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.496 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.496 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.497 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.497 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.497 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.497 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.497 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.497 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.497 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.497 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.498 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.498 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.498 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.498 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.498 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.498 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.498 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.499 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.499 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.499 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.499 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.499 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.499 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.499 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.499 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.500 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.500 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.500 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.500 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.500 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.500 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.500 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.501 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.501 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.501 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.501 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.501 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.501 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.501 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.501 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.502 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.502 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.502 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.502 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.502 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.502 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.502 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.503 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.503 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.503 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.503 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.503 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.503 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.503 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.503 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.504 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.504 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.504 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.504 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.504 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.504 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.504 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.505 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.505 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.505 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.505 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.505 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.505 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.505 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.505 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.506 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.506 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.506 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.506 184518 DEBUG oslo_service.service [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.507 184518 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.519 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.520 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.520 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.520 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 05 14:35:20 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 05 14:35:20 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.642 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff3c3c12220> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.644 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff3c3c12220> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.645 184518 INFO nova.virt.libvirt.driver [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Connection event '1' reason 'None'
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.667 184518 WARNING nova.virt.libvirt.driver [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 05 14:35:20 compute-0 nova_compute[184514]: 2026-01-05 14:35:20.667 184518 DEBUG nova.virt.libvirt.volume.mount [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 05 14:35:20 compute-0 sudo[185199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnmcweihepizhgqurasehtafxpxrylup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623720.3495831-1275-278407270043036/AnsiballZ_podman_container.py'
Jan 05 14:35:20 compute-0 sudo[185199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:21 compute-0 python3.9[185201]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 05 14:35:21 compute-0 sudo[185199]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:21 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.629 184518 INFO nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Libvirt host capabilities <capabilities>
Jan 05 14:35:21 compute-0 nova_compute[184514]: 
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <host>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <uuid>21aea88d-e46b-43ca-a852-7ac5c1bf4054</uuid>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <cpu>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <arch>x86_64</arch>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model>EPYC-Rome-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <vendor>AMD</vendor>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <microcode version='16777317'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <signature family='23' model='49' stepping='0'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='x2apic'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='tsc-deadline'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='osxsave'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='hypervisor'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='tsc_adjust'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='spec-ctrl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='stibp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='arch-capabilities'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='cmp_legacy'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='topoext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='virt-ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='lbrv'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='tsc-scale'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='vmcb-clean'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='pause-filter'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='pfthreshold'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='svme-addr-chk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='rdctl-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='skip-l1dfl-vmentry'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='mds-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature name='pschange-mc-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <pages unit='KiB' size='4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <pages unit='KiB' size='2048'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <pages unit='KiB' size='1048576'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </cpu>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <power_management>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <suspend_mem/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <suspend_disk/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <suspend_hybrid/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </power_management>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <iommu support='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <migration_features>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <live/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <uri_transports>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <uri_transport>tcp</uri_transport>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <uri_transport>rdma</uri_transport>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </uri_transports>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </migration_features>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <topology>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <cells num='1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <cell id='0'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:           <memory unit='KiB'>7864308</memory>
Jan 05 14:35:21 compute-0 nova_compute[184514]:           <pages unit='KiB' size='4'>1966077</pages>
Jan 05 14:35:21 compute-0 nova_compute[184514]:           <pages unit='KiB' size='2048'>0</pages>
Jan 05 14:35:21 compute-0 nova_compute[184514]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 05 14:35:21 compute-0 nova_compute[184514]:           <distances>
Jan 05 14:35:21 compute-0 nova_compute[184514]:             <sibling id='0' value='10'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:           </distances>
Jan 05 14:35:21 compute-0 nova_compute[184514]:           <cpus num='8'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:           </cpus>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         </cell>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </cells>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </topology>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <cache>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </cache>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <secmodel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model>selinux</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <doi>0</doi>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </secmodel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <secmodel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model>dac</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <doi>0</doi>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </secmodel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </host>
Jan 05 14:35:21 compute-0 nova_compute[184514]: 
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <guest>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <os_type>hvm</os_type>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <arch name='i686'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <wordsize>32</wordsize>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <domain type='qemu'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <domain type='kvm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </arch>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <features>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <pae/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <nonpae/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <acpi default='on' toggle='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <apic default='on' toggle='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <cpuselection/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <deviceboot/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <disksnapshot default='on' toggle='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <externalSnapshot/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </features>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </guest>
Jan 05 14:35:21 compute-0 nova_compute[184514]: 
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <guest>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <os_type>hvm</os_type>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <arch name='x86_64'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <wordsize>64</wordsize>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <domain type='qemu'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <domain type='kvm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </arch>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <features>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <acpi default='on' toggle='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <apic default='on' toggle='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <cpuselection/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <deviceboot/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <disksnapshot default='on' toggle='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <externalSnapshot/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </features>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </guest>
Jan 05 14:35:21 compute-0 nova_compute[184514]: 
Jan 05 14:35:21 compute-0 nova_compute[184514]: </capabilities>
Jan 05 14:35:21 compute-0 nova_compute[184514]: 
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.643 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.671 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 05 14:35:21 compute-0 nova_compute[184514]: <domainCapabilities>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <domain>kvm</domain>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <arch>i686</arch>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <vcpu max='4096'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <iothreads supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <os supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <enum name='firmware'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <loader supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>rom</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pflash</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='readonly'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>yes</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>no</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='secure'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>no</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </loader>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </os>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <cpu>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='host-passthrough' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='hostPassthroughMigratable'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>on</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>off</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='maximum' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='maximumMigratable'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>on</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>off</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='host-model' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <vendor>AMD</vendor>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='x2apic'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='hypervisor'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='stibp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='overflow-recov'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='succor'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='lbrv'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc-scale'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='flushbyasid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='pause-filter'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='pfthreshold'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='disable' name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='custom' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Dhyana-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Genoa'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='auto-ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='auto-ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-128'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-256'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-512'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v6'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v7'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='KnightsMill'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512er'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512pf'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='KnightsMill-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512er'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512pf'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G4-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tbm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G5-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tbm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SierraForest'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cmpccxadd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SierraForest-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cmpccxadd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='athlon'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='athlon-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='core2duo'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='core2duo-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='coreduo'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='coreduo-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='n270'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='n270-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='phenom'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='phenom-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </cpu>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <memoryBacking supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <enum name='sourceType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>file</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>anonymous</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>memfd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </memoryBacking>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <devices>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <disk supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='diskDevice'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>disk</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>cdrom</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>floppy</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>lun</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='bus'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>fdc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>scsi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>sata</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-non-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </disk>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <graphics supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vnc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>egl-headless</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dbus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </graphics>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <video supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='modelType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vga</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>cirrus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>none</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>bochs</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>ramfb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </video>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <hostdev supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='mode'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>subsystem</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='startupPolicy'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>default</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>mandatory</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>requisite</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>optional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='subsysType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pci</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>scsi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='capsType'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='pciBackend'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </hostdev>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <rng supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-non-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>random</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>egd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>builtin</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </rng>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <filesystem supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='driverType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>path</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>handle</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtiofs</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </filesystem>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <tpm supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tpm-tis</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tpm-crb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>emulator</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>external</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendVersion'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>2.0</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </tpm>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <redirdev supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='bus'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </redirdev>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <channel supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pty</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>unix</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </channel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <crypto supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>qemu</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>builtin</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </crypto>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <interface supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>default</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>passt</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </interface>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <panic supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>isa</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>hyperv</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </panic>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <console supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>null</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pty</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dev</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>file</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pipe</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>stdio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>udp</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tcp</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>unix</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>qemu-vdagent</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dbus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </console>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </devices>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <features>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <gic supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <vmcoreinfo supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <genid supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <backingStoreInput supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <backup supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <async-teardown supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <ps2 supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <sev supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <sgx supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <hyperv supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='features'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>relaxed</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vapic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>spinlocks</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vpindex</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>runtime</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>synic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>stimer</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>reset</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vendor_id</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>frequencies</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>reenlightenment</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tlbflush</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>ipi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>avic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>emsr_bitmap</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>xmm_input</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <defaults>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <spinlocks>4095</spinlocks>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <stimer_direct>on</stimer_direct>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </defaults>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </hyperv>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <launchSecurity supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='sectype'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tdx</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </launchSecurity>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </features>
Jan 05 14:35:21 compute-0 nova_compute[184514]: </domainCapabilities>
Jan 05 14:35:21 compute-0 nova_compute[184514]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.680 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 05 14:35:21 compute-0 nova_compute[184514]: <domainCapabilities>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <domain>kvm</domain>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <arch>i686</arch>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <vcpu max='240'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <iothreads supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <os supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <enum name='firmware'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <loader supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>rom</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pflash</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='readonly'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>yes</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>no</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='secure'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>no</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </loader>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </os>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <cpu>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='host-passthrough' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='hostPassthroughMigratable'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>on</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>off</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='maximum' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='maximumMigratable'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>on</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>off</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='host-model' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <vendor>AMD</vendor>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='x2apic'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='hypervisor'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='stibp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='overflow-recov'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='succor'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='lbrv'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc-scale'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='flushbyasid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='pause-filter'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='pfthreshold'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='disable' name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='custom' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Dhyana-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Genoa'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='auto-ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='auto-ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-128'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-256'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-512'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v6'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v7'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='KnightsMill'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512er'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512pf'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='KnightsMill-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512er'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512pf'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G4-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tbm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G5-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tbm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SierraForest'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cmpccxadd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SierraForest-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cmpccxadd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='athlon'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='athlon-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='core2duo'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='core2duo-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='coreduo'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='coreduo-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='n270'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='n270-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='phenom'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='phenom-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </cpu>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <memoryBacking supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <enum name='sourceType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>file</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>anonymous</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>memfd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </memoryBacking>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <devices>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <disk supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='diskDevice'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>disk</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>cdrom</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>floppy</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>lun</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='bus'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>ide</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>fdc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>scsi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>sata</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-non-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </disk>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <graphics supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vnc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>egl-headless</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dbus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </graphics>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <video supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='modelType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vga</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>cirrus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>none</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>bochs</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>ramfb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </video>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <hostdev supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='mode'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>subsystem</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='startupPolicy'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>default</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>mandatory</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>requisite</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>optional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='subsysType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pci</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>scsi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='capsType'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='pciBackend'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </hostdev>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <rng supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-non-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>random</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>egd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>builtin</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </rng>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <filesystem supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='driverType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>path</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>handle</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtiofs</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </filesystem>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <tpm supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tpm-tis</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tpm-crb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>emulator</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>external</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendVersion'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>2.0</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </tpm>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <redirdev supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='bus'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </redirdev>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <channel supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pty</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>unix</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </channel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <crypto supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>qemu</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>builtin</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </crypto>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <interface supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>default</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>passt</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </interface>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <panic supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>isa</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>hyperv</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </panic>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <console supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>null</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pty</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dev</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>file</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pipe</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>stdio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>udp</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tcp</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>unix</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>qemu-vdagent</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dbus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </console>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </devices>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <features>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <gic supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <vmcoreinfo supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <genid supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <backingStoreInput supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <backup supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <async-teardown supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <ps2 supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <sev supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <sgx supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <hyperv supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='features'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>relaxed</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vapic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>spinlocks</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vpindex</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>runtime</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>synic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>stimer</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>reset</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vendor_id</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>frequencies</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>reenlightenment</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tlbflush</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>ipi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>avic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>emsr_bitmap</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>xmm_input</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <defaults>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <spinlocks>4095</spinlocks>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <stimer_direct>on</stimer_direct>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </defaults>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </hyperv>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <launchSecurity supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='sectype'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tdx</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </launchSecurity>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </features>
Jan 05 14:35:21 compute-0 nova_compute[184514]: </domainCapabilities>
Jan 05 14:35:21 compute-0 nova_compute[184514]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.730 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.737 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 05 14:35:21 compute-0 nova_compute[184514]: <domainCapabilities>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <domain>kvm</domain>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <arch>x86_64</arch>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <vcpu max='4096'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <iothreads supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <os supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <enum name='firmware'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>efi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <loader supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>rom</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pflash</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='readonly'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>yes</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>no</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='secure'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>yes</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>no</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </loader>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </os>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <cpu>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='host-passthrough' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='hostPassthroughMigratable'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>on</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>off</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='maximum' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='maximumMigratable'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>on</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>off</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='host-model' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <vendor>AMD</vendor>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='x2apic'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='hypervisor'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='stibp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='overflow-recov'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='succor'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='lbrv'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc-scale'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='flushbyasid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='pause-filter'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='pfthreshold'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='disable' name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='custom' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Dhyana-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Genoa'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='auto-ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='auto-ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-128'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-256'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-512'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v6'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v7'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='KnightsMill'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512er'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512pf'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='KnightsMill-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512er'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512pf'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G4-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tbm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G5-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tbm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SierraForest'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cmpccxadd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SierraForest-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cmpccxadd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='athlon'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='athlon-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='core2duo'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='core2duo-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='coreduo'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='coreduo-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='n270'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='n270-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='phenom'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='phenom-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </cpu>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <memoryBacking supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <enum name='sourceType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>file</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>anonymous</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>memfd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </memoryBacking>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <devices>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <disk supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='diskDevice'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>disk</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>cdrom</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>floppy</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>lun</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='bus'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>fdc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>scsi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>sata</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-non-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </disk>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <graphics supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vnc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>egl-headless</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dbus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </graphics>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <video supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='modelType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vga</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>cirrus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>none</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>bochs</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>ramfb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </video>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <hostdev supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='mode'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>subsystem</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='startupPolicy'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>default</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>mandatory</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>requisite</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>optional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='subsysType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pci</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>scsi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='capsType'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='pciBackend'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </hostdev>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <rng supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-non-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>random</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>egd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>builtin</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </rng>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <filesystem supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='driverType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>path</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>handle</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtiofs</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </filesystem>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <tpm supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tpm-tis</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tpm-crb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>emulator</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>external</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendVersion'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>2.0</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </tpm>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <redirdev supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='bus'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </redirdev>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <channel supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pty</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>unix</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </channel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <crypto supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>qemu</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>builtin</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </crypto>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <interface supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>default</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>passt</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </interface>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <panic supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>isa</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>hyperv</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </panic>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <console supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>null</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pty</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dev</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>file</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pipe</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>stdio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>udp</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tcp</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>unix</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>qemu-vdagent</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dbus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </console>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </devices>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <features>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <gic supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <vmcoreinfo supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <genid supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <backingStoreInput supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <backup supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <async-teardown supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <ps2 supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <sev supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <sgx supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <hyperv supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='features'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>relaxed</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vapic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>spinlocks</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vpindex</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>runtime</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>synic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>stimer</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>reset</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vendor_id</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>frequencies</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>reenlightenment</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tlbflush</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>ipi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>avic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>emsr_bitmap</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>xmm_input</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <defaults>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <spinlocks>4095</spinlocks>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <stimer_direct>on</stimer_direct>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </defaults>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </hyperv>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <launchSecurity supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='sectype'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tdx</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </launchSecurity>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </features>
Jan 05 14:35:21 compute-0 nova_compute[184514]: </domainCapabilities>
Jan 05 14:35:21 compute-0 nova_compute[184514]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.796 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 05 14:35:21 compute-0 nova_compute[184514]: <domainCapabilities>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <domain>kvm</domain>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <arch>x86_64</arch>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <vcpu max='240'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <iothreads supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <os supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <enum name='firmware'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <loader supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>rom</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pflash</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='readonly'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>yes</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>no</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='secure'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>no</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </loader>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </os>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <cpu>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='host-passthrough' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='hostPassthroughMigratable'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>on</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>off</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='maximum' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='maximumMigratable'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>on</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>off</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='host-model' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <vendor>AMD</vendor>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='x2apic'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='hypervisor'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='stibp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='overflow-recov'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='succor'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='lbrv'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='tsc-scale'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='flushbyasid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='pause-filter'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='pfthreshold'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <feature policy='disable' name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <mode name='custom' supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Broadwell-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Cooperlake-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Denverton-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Dhyana-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Genoa'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='auto-ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='auto-ibrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Milan-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amd-psfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='stibp-always-on'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-Rome-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='EPYC-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='GraniteRapids-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-128'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-256'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx10-512'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='prefetchiti'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Haswell-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v6'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Icelake-Server-v7'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='IvyBridge-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='KnightsMill'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512er'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512pf'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='KnightsMill-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512er'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512pf'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G4-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tbm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Opteron_G5-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fma4'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tbm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xop'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SapphireRapids-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='amx-tile'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-bf16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-fp16'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bitalg'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrc'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fzrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='la57'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='taa-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xfd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SierraForest'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cmpccxadd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='SierraForest-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ifma'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cmpccxadd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fbsdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='fsrs'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ibrs-all'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mcdt-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pbrsb-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='psdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='serialize'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vaes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Client-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='hle'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='rtm'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Skylake-Server-v5'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512bw'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512cd'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512dq'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512f'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='avx512vl'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='invpcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pcid'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='pku'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='mpx'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v2'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v3'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='core-capability'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='split-lock-detect'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='Snowridge-v4'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='cldemote'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='erms'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='gfni'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdir64b'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='movdiri'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='xsaves'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='athlon'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='athlon-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='core2duo'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='core2duo-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='coreduo'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='coreduo-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='n270'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='n270-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='ss'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='phenom'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <blockers model='phenom-v1'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnow'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <feature name='3dnowext'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </blockers>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </mode>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </cpu>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <memoryBacking supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <enum name='sourceType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>file</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>anonymous</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <value>memfd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </memoryBacking>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <devices>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <disk supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='diskDevice'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>disk</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>cdrom</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>floppy</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>lun</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='bus'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>ide</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>fdc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>scsi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>sata</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-non-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </disk>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <graphics supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vnc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>egl-headless</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dbus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </graphics>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <video supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='modelType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vga</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>cirrus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>none</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>bochs</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>ramfb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </video>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <hostdev supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='mode'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>subsystem</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='startupPolicy'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>default</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>mandatory</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>requisite</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>optional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='subsysType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pci</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>scsi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='capsType'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='pciBackend'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </hostdev>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <rng supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtio-non-transitional</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>random</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>egd</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>builtin</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </rng>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <filesystem supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='driverType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>path</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>handle</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>virtiofs</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </filesystem>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <tpm supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tpm-tis</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tpm-crb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>emulator</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>external</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendVersion'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>2.0</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </tpm>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <redirdev supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='bus'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>usb</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </redirdev>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <channel supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pty</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>unix</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </channel>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <crypto supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>qemu</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendModel'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>builtin</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </crypto>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <interface supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='backendType'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>default</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>passt</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </interface>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <panic supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='model'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>isa</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>hyperv</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </panic>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <console supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='type'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>null</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vc</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pty</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dev</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>file</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>pipe</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>stdio</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>udp</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tcp</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>unix</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>qemu-vdagent</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>dbus</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </console>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </devices>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   <features>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <gic supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <vmcoreinfo supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <genid supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <backingStoreInput supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <backup supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <async-teardown supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <ps2 supported='yes'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <sev supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <sgx supported='no'/>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <hyperv supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='features'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>relaxed</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vapic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>spinlocks</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vpindex</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>runtime</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>synic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>stimer</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>reset</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>vendor_id</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>frequencies</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>reenlightenment</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tlbflush</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>ipi</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>avic</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>emsr_bitmap</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>xmm_input</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <defaults>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <spinlocks>4095</spinlocks>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <stimer_direct>on</stimer_direct>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </defaults>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </hyperv>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     <launchSecurity supported='yes'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       <enum name='sectype'>
Jan 05 14:35:21 compute-0 nova_compute[184514]:         <value>tdx</value>
Jan 05 14:35:21 compute-0 nova_compute[184514]:       </enum>
Jan 05 14:35:21 compute-0 nova_compute[184514]:     </launchSecurity>
Jan 05 14:35:21 compute-0 nova_compute[184514]:   </features>
Jan 05 14:35:21 compute-0 nova_compute[184514]: </domainCapabilities>
Jan 05 14:35:21 compute-0 nova_compute[184514]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.852 184518 DEBUG nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.853 184518 INFO nova.virt.libvirt.host [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Secure Boot support detected
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.859 184518 INFO nova.virt.libvirt.driver [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.873 184518 DEBUG nova.virt.libvirt.driver [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 05 14:35:21 compute-0 nova_compute[184514]: 2026-01-05 14:35:21.974 184518 INFO nova.virt.node [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Determined node identity 81b80649-e249-4f86-9377-abfcf7fc43dd from /var/lib/nova/compute_id
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.007 184518 WARNING nova.compute.manager [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Compute nodes ['81b80649-e249-4f86-9377-abfcf7fc43dd'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 05 14:35:22 compute-0 sudo[185386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llrdgkyujwvmnjuwxkqqfcvuilployri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623721.6216378-1283-44473529738811/AnsiballZ_systemd.py'
Jan 05 14:35:22 compute-0 sudo[185386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.055 184518 INFO nova.compute.manager [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.097 184518 WARNING nova.compute.manager [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.097 184518 DEBUG oslo_concurrency.lockutils [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.098 184518 DEBUG oslo_concurrency.lockutils [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.098 184518 DEBUG oslo_concurrency.lockutils [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.098 184518 DEBUG nova.compute.resource_tracker [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:35:22 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 05 14:35:22 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 05 14:35:22 compute-0 python3.9[185388]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:35:22 compute-0 systemd[1]: Stopping nova_compute container...
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.439 184518 WARNING nova.virt.libvirt.driver [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.441 184518 DEBUG nova.compute.resource_tracker [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6036MB free_disk=72.64863586425781GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.442 184518 DEBUG oslo_concurrency.lockutils [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.442 184518 DEBUG oslo_concurrency.lockutils [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.460 184518 DEBUG oslo_concurrency.lockutils [None req-4c3ba683-77c2-4608-bac7-2f6349447fe1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.460 184518 DEBUG oslo_concurrency.lockutils [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.460 184518 DEBUG oslo_concurrency.lockutils [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:35:22 compute-0 nova_compute[184514]: 2026-01-05 14:35:22.461 184518 DEBUG oslo_concurrency.lockutils [None req-274d429e-bb73-4012-b2be-a01063da4512 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:35:22 compute-0 virtqemud[185095]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Jan 05 14:35:22 compute-0 virtqemud[185095]: hostname: compute-0
Jan 05 14:35:22 compute-0 virtqemud[185095]: End of file while reading data: Input/output error
Jan 05 14:35:22 compute-0 systemd[1]: libpod-a5436cd4a4f091f12bd991a7df67531382e2daa92a3894a8c720f5d947bd25f8.scope: Deactivated successfully.
Jan 05 14:35:22 compute-0 systemd[1]: libpod-a5436cd4a4f091f12bd991a7df67531382e2daa92a3894a8c720f5d947bd25f8.scope: Consumed 2.927s CPU time.
Jan 05 14:35:22 compute-0 podman[185415]: 2026-01-05 14:35:22.842720323 +0000 UTC m=+0.432746413 container died a5436cd4a4f091f12bd991a7df67531382e2daa92a3894a8c720f5d947bd25f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202)
Jan 05 14:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5436cd4a4f091f12bd991a7df67531382e2daa92a3894a8c720f5d947bd25f8-userdata-shm.mount: Deactivated successfully.
Jan 05 14:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456-merged.mount: Deactivated successfully.
Jan 05 14:35:22 compute-0 podman[185415]: 2026-01-05 14:35:22.920730519 +0000 UTC m=+0.510756609 container cleanup a5436cd4a4f091f12bd991a7df67531382e2daa92a3894a8c720f5d947bd25f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 14:35:22 compute-0 podman[185415]: nova_compute
Jan 05 14:35:22 compute-0 podman[185444]: nova_compute
Jan 05 14:35:23 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 05 14:35:23 compute-0 systemd[1]: Stopped nova_compute container.
Jan 05 14:35:23 compute-0 systemd[1]: Starting nova_compute container...
Jan 05 14:35:23 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16101e3c6d5bd3e571933d16fcc906e0f57aaaafe5504b900dac1cb01520f456/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:23 compute-0 podman[185458]: 2026-01-05 14:35:23.159652845 +0000 UTC m=+0.127018020 container init a5436cd4a4f091f12bd991a7df67531382e2daa92a3894a8c720f5d947bd25f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute)
Jan 05 14:35:23 compute-0 podman[185458]: 2026-01-05 14:35:23.172652684 +0000 UTC m=+0.140017839 container start a5436cd4a4f091f12bd991a7df67531382e2daa92a3894a8c720f5d947bd25f8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 05 14:35:23 compute-0 podman[185458]: nova_compute
Jan 05 14:35:23 compute-0 nova_compute[185474]: + sudo -E kolla_set_configs
Jan 05 14:35:23 compute-0 systemd[1]: Started nova_compute container.
Jan 05 14:35:23 compute-0 sudo[185386]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Validating config file
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Copying service configuration files
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Deleting /etc/ceph
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Creating directory /etc/ceph
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /etc/ceph
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Writing out command to execute
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 05 14:35:23 compute-0 nova_compute[185474]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 05 14:35:23 compute-0 nova_compute[185474]: ++ cat /run_command
Jan 05 14:35:23 compute-0 nova_compute[185474]: + CMD=nova-compute
Jan 05 14:35:23 compute-0 nova_compute[185474]: + ARGS=
Jan 05 14:35:23 compute-0 nova_compute[185474]: + sudo kolla_copy_cacerts
Jan 05 14:35:23 compute-0 nova_compute[185474]: + [[ ! -n '' ]]
Jan 05 14:35:23 compute-0 nova_compute[185474]: + . kolla_extend_start
Jan 05 14:35:23 compute-0 nova_compute[185474]: + echo 'Running command: '\''nova-compute'\'''
Jan 05 14:35:23 compute-0 nova_compute[185474]: Running command: 'nova-compute'
Jan 05 14:35:23 compute-0 nova_compute[185474]: + umask 0022
Jan 05 14:35:23 compute-0 nova_compute[185474]: + exec nova-compute
Jan 05 14:35:23 compute-0 sudo[185635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcmkraeuhaxoydtphhcoxpuqwhkenerk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623723.5261886-1292-17667708488611/AnsiballZ_podman_container.py'
Jan 05 14:35:23 compute-0 sudo[185635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:24 compute-0 python3.9[185637]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 05 14:35:24 compute-0 systemd[1]: Started libpod-conmon-6614cf1261854ec6e93a1d9392cf5f4e7bee06416a44a6aafd99b1957b4e99da.scope.
Jan 05 14:35:24 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f09c7d634ee463382ec6537718097332f4c6362ea87abace50044c88e0efe31/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f09c7d634ee463382ec6537718097332f4c6362ea87abace50044c88e0efe31/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f09c7d634ee463382ec6537718097332f4c6362ea87abace50044c88e0efe31/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 05 14:35:24 compute-0 podman[185664]: 2026-01-05 14:35:24.481683768 +0000 UTC m=+0.170261724 container init 6614cf1261854ec6e93a1d9392cf5f4e7bee06416a44a6aafd99b1957b4e99da (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Jan 05 14:35:24 compute-0 podman[185664]: 2026-01-05 14:35:24.493259499 +0000 UTC m=+0.181837445 container start 6614cf1261854ec6e93a1d9392cf5f4e7bee06416a44a6aafd99b1957b4e99da (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3)
Jan 05 14:35:24 compute-0 python3.9[185637]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Applying nova statedir ownership
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 05 14:35:24 compute-0 nova_compute_init[185686]: INFO:nova_statedir:Nova statedir ownership complete
Jan 05 14:35:24 compute-0 systemd[1]: libpod-6614cf1261854ec6e93a1d9392cf5f4e7bee06416a44a6aafd99b1957b4e99da.scope: Deactivated successfully.
Jan 05 14:35:24 compute-0 podman[185700]: 2026-01-05 14:35:24.613120233 +0000 UTC m=+0.020728749 container died 6614cf1261854ec6e93a1d9392cf5f4e7bee06416a44a6aafd99b1957b4e99da (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 05 14:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6614cf1261854ec6e93a1d9392cf5f4e7bee06416a44a6aafd99b1957b4e99da-userdata-shm.mount: Deactivated successfully.
Jan 05 14:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f09c7d634ee463382ec6537718097332f4c6362ea87abace50044c88e0efe31-merged.mount: Deactivated successfully.
Jan 05 14:35:24 compute-0 podman[185700]: 2026-01-05 14:35:24.86324968 +0000 UTC m=+0.270858226 container cleanup 6614cf1261854ec6e93a1d9392cf5f4e7bee06416a44a6aafd99b1957b4e99da (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS)
Jan 05 14:35:24 compute-0 systemd[1]: libpod-conmon-6614cf1261854ec6e93a1d9392cf5f4e7bee06416a44a6aafd99b1957b4e99da.scope: Deactivated successfully.
Jan 05 14:35:24 compute-0 sudo[185635]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:25 compute-0 nova_compute[185474]: 2026-01-05 14:35:25.244 185478 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 14:35:25 compute-0 nova_compute[185474]: 2026-01-05 14:35:25.244 185478 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 14:35:25 compute-0 nova_compute[185474]: 2026-01-05 14:35:25.244 185478 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 14:35:25 compute-0 nova_compute[185474]: 2026-01-05 14:35:25.244 185478 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 05 14:35:25 compute-0 nova_compute[185474]: 2026-01-05 14:35:25.363 185478 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:35:25 compute-0 sshd-session[162357]: Connection closed by 192.168.122.30 port 51062
Jan 05 14:35:25 compute-0 sshd-session[162354]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:35:25 compute-0 systemd-logind[795]: Session 24 logged out. Waiting for processes to exit.
Jan 05 14:35:25 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 05 14:35:25 compute-0 systemd[1]: session-24.scope: Consumed 1min 53.562s CPU time.
Jan 05 14:35:25 compute-0 nova_compute[185474]: 2026-01-05 14:35:25.388 185478 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:35:25 compute-0 nova_compute[185474]: 2026-01-05 14:35:25.389 185478 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 05 14:35:25 compute-0 systemd-logind[795]: Removed session 24.
Jan 05 14:35:25 compute-0 nova_compute[185474]: 2026-01-05 14:35:25.896 185478 INFO nova.virt.driver [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.022 185478 INFO nova.compute.provider_config [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.037 185478 DEBUG oslo_concurrency.lockutils [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.038 185478 DEBUG oslo_concurrency.lockutils [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.038 185478 DEBUG oslo_concurrency.lockutils [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.038 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.039 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.039 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.039 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.039 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.039 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.040 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.040 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.040 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.040 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.040 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.040 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.041 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.041 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.041 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.041 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.041 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.042 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.042 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.042 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.042 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.042 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.042 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.043 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.043 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.043 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.043 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.043 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.044 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.044 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.044 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.044 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.044 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.045 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.045 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.045 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.045 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.045 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.046 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.046 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.046 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.046 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.046 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.047 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.047 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.047 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.047 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.047 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.048 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.048 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.048 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.048 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.048 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.048 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.049 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.049 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.049 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.049 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.049 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.050 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.050 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.050 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.050 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.050 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.050 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.051 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.051 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.051 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.051 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.051 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.051 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.052 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.052 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.052 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.052 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.052 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.053 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.053 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.053 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.053 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.053 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.054 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.054 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.054 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.054 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.054 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.054 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.055 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.055 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.055 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.055 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.055 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.056 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.056 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.056 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.056 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.056 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.056 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.057 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.057 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.057 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.057 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.057 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.058 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.058 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.058 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.058 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.058 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.059 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.059 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.059 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.059 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.059 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.059 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.060 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.060 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.060 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.060 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.060 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.061 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.061 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.061 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.061 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.061 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.061 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.062 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.062 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.062 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.062 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.062 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.063 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.063 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.063 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.063 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.063 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.063 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.064 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.064 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.064 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.064 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.064 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.065 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.065 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.065 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.065 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.065 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.065 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.066 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.066 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.066 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.066 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.066 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.067 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.067 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.067 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.067 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.067 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.068 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.068 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.068 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.068 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.068 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.068 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.068 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.068 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.069 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.069 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.069 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.069 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.069 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.069 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.069 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.070 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.070 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.070 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.070 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.070 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.070 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.070 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.071 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.071 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.071 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.071 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.071 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.071 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.071 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.071 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.072 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.072 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.072 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.072 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.072 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.072 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.072 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.073 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.073 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.073 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.073 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.073 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.073 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.073 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.073 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.074 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.074 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.074 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.074 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.074 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.074 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.074 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.074 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.075 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.075 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.075 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.075 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.075 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.075 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.075 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.076 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.076 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.076 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.076 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.076 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.076 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.076 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.076 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.077 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.077 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.077 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.077 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.077 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.077 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.077 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.078 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.078 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.078 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.078 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.078 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.078 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.078 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.078 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.079 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.079 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.079 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.079 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.079 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.079 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.079 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.080 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.080 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.080 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.080 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.080 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.080 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.080 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.080 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.081 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.081 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.081 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.081 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.081 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.081 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.081 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.082 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.082 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.082 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.082 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.082 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.082 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.082 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.082 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.083 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.083 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.083 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.083 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.083 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.083 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.083 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.084 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.084 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.084 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.084 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.084 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.084 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.084 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.084 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.085 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.085 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.085 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.085 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.085 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.085 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.085 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.086 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.086 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.086 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.086 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.086 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.086 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.086 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.086 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.087 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.087 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.087 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.087 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.087 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.087 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.087 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.088 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.088 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.088 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.088 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.088 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.088 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.088 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.089 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.089 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.089 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.089 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.089 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.089 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.089 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.089 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.090 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.090 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.090 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.090 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.090 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.090 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.090 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.090 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.091 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.091 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.091 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.091 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.091 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.091 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.091 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.092 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.092 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.092 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.092 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.092 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.092 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.092 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.092 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.093 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.093 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.093 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.093 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.093 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.093 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.093 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.094 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.094 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.094 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.094 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.094 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.094 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.094 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.095 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.095 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.095 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.095 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.095 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.095 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.095 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.096 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.096 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.096 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.096 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.096 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.096 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.096 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.096 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.097 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.097 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.097 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.097 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.097 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.097 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.097 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.098 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.098 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.098 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.098 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.098 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.098 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.098 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.098 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.099 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.099 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.099 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.099 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.099 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.099 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.099 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.100 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.100 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.100 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.100 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.100 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.100 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.100 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.100 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.101 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.101 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.101 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.101 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.101 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.101 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.101 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.101 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.102 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.102 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.102 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.102 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.102 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.102 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.102 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.103 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.103 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.103 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.103 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.103 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.103 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.103 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.103 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.104 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.104 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.104 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.104 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.104 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.104 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.104 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.105 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.105 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.105 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.105 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.105 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.105 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.105 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.105 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.106 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.106 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.106 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.106 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.106 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.106 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.106 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.107 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.107 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.107 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.107 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.107 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.107 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.107 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.108 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.108 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.108 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.108 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.108 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.108 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.108 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.108 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.109 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.109 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.109 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.109 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.109 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.109 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.109 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.110 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.110 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.110 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.110 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.110 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.110 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.110 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.110 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.111 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.111 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.111 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.111 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.111 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.111 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.111 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.112 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.112 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.112 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.112 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.112 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.112 185478 WARNING oslo_config.cfg [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 05 14:35:26 compute-0 nova_compute[185474]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 05 14:35:26 compute-0 nova_compute[185474]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 05 14:35:26 compute-0 nova_compute[185474]: and ``live_migration_inbound_addr`` respectively.
Jan 05 14:35:26 compute-0 nova_compute[185474]: ).  Its value may be silently ignored in the future.
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.112 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
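[editor's note] The deprecation warning above names the two replacement options, and the line just above logs the deprecated value still in effect on this host (qemu+tls://%s/system). A minimal, illustrative nova.conf sketch of the equivalent modern settings follows; the inbound address is an assumed placeholder, not a value taken from this log (here libvirt.live_migration_inbound_addr is logged as None):

    [libvirt]
    # TLS transport, previously implied by the qemu+tls:// scheme in live_migration_uri
    live_migration_scheme = tls
    # Address other compute hosts connect to for inbound live migrations
    # (assumed example value; set to this host's migration-network address)
    live_migration_inbound_addr = 192.0.2.10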
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.113 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.113 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.113 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.113 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.113 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.113 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.113 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.114 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.114 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.114 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.114 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.114 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.114 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.114 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.115 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.115 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.115 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.115 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.115 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.115 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.115 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.115 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.116 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.116 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.116 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.116 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.116 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.116 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.116 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.117 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.117 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.117 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.117 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.117 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.117 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.117 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.118 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.118 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.118 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.118 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.118 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.118 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.118 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.118 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.119 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.119 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.119 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.119 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.119 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.119 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.119 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.120 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.120 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.120 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.120 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.120 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.120 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.120 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.120 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.121 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.121 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.121 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.121 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.121 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.121 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.121 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.122 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.122 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.122 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.122 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.122 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.122 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.122 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.122 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.123 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.123 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.123 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.123 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.123 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.123 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.123 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.124 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.124 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.124 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.124 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.124 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.124 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.124 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.124 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.125 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.125 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.125 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.125 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.125 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.125 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.125 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.125 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.126 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.126 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.126 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.126 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.126 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.126 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.126 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.127 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.127 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.127 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.127 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.127 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.127 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.127 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.127 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.128 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.128 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.128 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.128 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.128 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.128 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.128 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.128 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.129 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.129 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.129 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.129 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.129 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.129 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.129 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.130 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.130 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.130 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.130 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.130 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.130 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.130 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.130 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.131 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.131 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.131 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.131 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.131 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.131 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.132 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.132 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.132 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.132 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.132 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.132 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.132 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.133 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.133 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.133 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.133 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.133 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.133 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.133 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.134 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.134 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.134 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.134 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.134 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.134 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.134 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.135 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.135 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.135 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.135 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.135 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.135 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.135 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.135 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.136 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.136 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.136 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.136 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.136 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.136 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.136 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.137 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.137 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.137 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.137 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.137 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.137 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.137 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.138 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.138 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.138 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.138 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.138 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.138 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.138 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.139 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.139 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.139 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.139 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.139 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.139 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.139 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.140 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.140 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.140 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.140 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.140 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.140 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.140 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.141 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.141 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.141 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.141 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.141 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.141 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.141 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.142 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.142 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.142 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.142 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.142 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.142 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.142 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.142 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.143 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.143 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.143 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.143 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.143 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.143 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.143 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.144 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.144 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.144 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.144 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.144 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.144 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.144 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.144 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.145 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.145 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.145 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.145 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.145 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.145 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.145 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.146 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.146 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.146 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.146 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.146 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.146 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.146 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.147 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.147 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.147 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.147 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.147 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.147 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.147 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.148 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.148 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.148 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.148 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.148 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.148 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.149 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.149 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.149 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.149 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.149 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.149 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.149 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.150 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.150 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.150 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.150 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.150 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.150 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.150 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.151 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.151 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.151 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.151 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.151 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.151 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.151 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.151 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.152 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.152 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.152 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.152 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.152 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.152 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.152 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.153 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.153 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.153 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.153 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.153 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.153 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.153 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.154 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.154 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.154 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.154 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.154 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.154 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.154 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.155 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.155 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.155 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.155 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.155 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.155 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.155 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.156 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.156 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.156 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.156 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.156 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.156 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.156 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.157 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.157 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.157 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.157 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.157 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.157 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.157 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.157 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.158 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.158 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.158 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.158 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.158 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.158 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.158 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.159 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.159 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.159 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.159 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.159 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.159 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.159 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.160 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.160 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.160 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.160 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.160 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.160 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.160 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.161 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.161 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.161 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.161 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.161 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.161 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.161 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.161 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.162 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.162 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.162 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.162 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.162 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.162 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.162 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.163 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.163 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.163 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.163 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.163 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.163 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.163 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.163 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.164 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.164 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.164 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.164 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.164 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.164 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.164 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.165 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.165 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.165 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.165 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.165 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.165 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.165 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.166 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.166 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.166 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.166 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.166 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.166 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.166 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.166 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.167 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.167 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.167 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.167 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.167 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.167 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.167 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.168 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.168 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.168 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.168 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.168 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.168 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.168 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.169 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.169 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.169 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.169 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.169 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.169 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.169 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.170 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.170 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.170 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.170 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.170 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.170 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.170 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.170 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.171 185478 DEBUG oslo_service.service [None req-efa2d899-5953-4d92-9e58-23b1055a02c2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.172 185478 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.188 185478 INFO nova.virt.node [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Determined node identity 81b80649-e249-4f86-9377-abfcf7fc43dd from /var/lib/nova/compute_id
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.189 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.189 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.190 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.190 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.203 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f2aa76e1a60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.206 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f2aa76e1a60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.206 185478 INFO nova.virt.libvirt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Connection event '1' reason 'None'
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.213 185478 INFO nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Libvirt host capabilities <capabilities>
Jan 05 14:35:26 compute-0 nova_compute[185474]: 
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <host>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <uuid>21aea88d-e46b-43ca-a852-7ac5c1bf4054</uuid>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <cpu>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <arch>x86_64</arch>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model>EPYC-Rome-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <vendor>AMD</vendor>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <microcode version='16777317'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <signature family='23' model='49' stepping='0'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='x2apic'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='tsc-deadline'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='osxsave'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='hypervisor'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='tsc_adjust'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='spec-ctrl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='stibp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='arch-capabilities'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='cmp_legacy'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='topoext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='virt-ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='lbrv'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='tsc-scale'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='vmcb-clean'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='pause-filter'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='pfthreshold'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='svme-addr-chk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='rdctl-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='skip-l1dfl-vmentry'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='mds-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature name='pschange-mc-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <pages unit='KiB' size='4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <pages unit='KiB' size='2048'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <pages unit='KiB' size='1048576'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </cpu>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <power_management>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <suspend_mem/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <suspend_disk/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <suspend_hybrid/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </power_management>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <iommu support='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <migration_features>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <live/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <uri_transports>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <uri_transport>tcp</uri_transport>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <uri_transport>rdma</uri_transport>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </uri_transports>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </migration_features>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <topology>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <cells num='1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <cell id='0'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:           <memory unit='KiB'>7864308</memory>
Jan 05 14:35:26 compute-0 nova_compute[185474]:           <pages unit='KiB' size='4'>1966077</pages>
Jan 05 14:35:26 compute-0 nova_compute[185474]:           <pages unit='KiB' size='2048'>0</pages>
Jan 05 14:35:26 compute-0 nova_compute[185474]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 05 14:35:26 compute-0 nova_compute[185474]:           <distances>
Jan 05 14:35:26 compute-0 nova_compute[185474]:             <sibling id='0' value='10'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:           </distances>
Jan 05 14:35:26 compute-0 nova_compute[185474]:           <cpus num='8'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:           </cpus>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         </cell>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </cells>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </topology>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <cache>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </cache>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <secmodel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model>selinux</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <doi>0</doi>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </secmodel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <secmodel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model>dac</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <doi>0</doi>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </secmodel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </host>
Jan 05 14:35:26 compute-0 nova_compute[185474]: 
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <guest>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <os_type>hvm</os_type>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <arch name='i686'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <wordsize>32</wordsize>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <domain type='qemu'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <domain type='kvm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </arch>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <features>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <pae/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <nonpae/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <acpi default='on' toggle='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <apic default='on' toggle='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <cpuselection/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <deviceboot/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <disksnapshot default='on' toggle='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <externalSnapshot/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </features>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </guest>
Jan 05 14:35:26 compute-0 nova_compute[185474]: 
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <guest>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <os_type>hvm</os_type>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <arch name='x86_64'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <wordsize>64</wordsize>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <domain type='qemu'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <domain type='kvm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </arch>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <features>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <acpi default='on' toggle='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <apic default='on' toggle='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <cpuselection/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <deviceboot/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <disksnapshot default='on' toggle='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <externalSnapshot/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </features>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </guest>
Jan 05 14:35:26 compute-0 nova_compute[185474]: 
Jan 05 14:35:26 compute-0 nova_compute[185474]: </capabilities>
Jan 05 14:35:26 compute-0 nova_compute[185474]: 
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.220 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.222 185478 DEBUG nova.virt.libvirt.volume.mount [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.225 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 05 14:35:26 compute-0 nova_compute[185474]: <domainCapabilities>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <domain>kvm</domain>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <arch>i686</arch>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <vcpu max='4096'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <iothreads supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <os supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <enum name='firmware'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <loader supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>rom</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pflash</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='readonly'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>yes</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>no</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='secure'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>no</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </loader>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </os>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <cpu>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='host-passthrough' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='hostPassthroughMigratable'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>on</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>off</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='maximum' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='maximumMigratable'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>on</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>off</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='host-model' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <vendor>AMD</vendor>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='x2apic'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='hypervisor'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='stibp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='overflow-recov'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='succor'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='lbrv'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc-scale'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='flushbyasid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='pause-filter'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='pfthreshold'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='disable' name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='custom' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Dhyana-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Genoa'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='auto-ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='auto-ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-128'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-256'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-512'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v6'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v7'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='KnightsMill'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512er'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512pf'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='KnightsMill-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512er'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512pf'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G4-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tbm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G5-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tbm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SierraForest'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cmpccxadd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SierraForest-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cmpccxadd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='athlon'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='athlon-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='core2duo'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='core2duo-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='coreduo'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='coreduo-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='n270'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='n270-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='phenom'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='phenom-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </cpu>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <memoryBacking supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <enum name='sourceType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>file</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>anonymous</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>memfd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </memoryBacking>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <devices>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <disk supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='diskDevice'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>disk</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>cdrom</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>floppy</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>lun</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='bus'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>fdc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>scsi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>sata</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-non-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <graphics supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vnc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>egl-headless</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dbus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </graphics>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <video supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='modelType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vga</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>cirrus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>none</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>bochs</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>ramfb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </video>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <hostdev supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='mode'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>subsystem</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='startupPolicy'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>default</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>mandatory</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>requisite</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>optional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='subsysType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pci</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>scsi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='capsType'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='pciBackend'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </hostdev>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <rng supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-non-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>random</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>egd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>builtin</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </rng>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <filesystem supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='driverType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>path</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>handle</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtiofs</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </filesystem>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <tpm supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tpm-tis</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tpm-crb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>emulator</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>external</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendVersion'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>2.0</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </tpm>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <redirdev supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='bus'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </redirdev>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <channel supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pty</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>unix</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </channel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <crypto supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>qemu</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>builtin</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </crypto>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <interface supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>default</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>passt</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </interface>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <panic supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>isa</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>hyperv</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </panic>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <console supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>null</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pty</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dev</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>file</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pipe</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>stdio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>udp</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tcp</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>unix</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>qemu-vdagent</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dbus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </console>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </devices>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <features>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <gic supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <vmcoreinfo supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <genid supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <backingStoreInput supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <backup supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <async-teardown supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <ps2 supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <sev supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <sgx supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <hyperv supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='features'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>relaxed</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vapic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>spinlocks</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vpindex</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>runtime</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>synic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>stimer</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>reset</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vendor_id</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>frequencies</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>reenlightenment</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tlbflush</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>ipi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>avic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>emsr_bitmap</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>xmm_input</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <defaults>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <spinlocks>4095</spinlocks>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <stimer_direct>on</stimer_direct>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </defaults>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </hyperv>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <launchSecurity supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='sectype'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tdx</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </launchSecurity>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </features>
Jan 05 14:35:26 compute-0 nova_compute[185474]: </domainCapabilities>
Jan 05 14:35:26 compute-0 nova_compute[185474]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.232 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 05 14:35:26 compute-0 nova_compute[185474]: <domainCapabilities>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <domain>kvm</domain>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <arch>i686</arch>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <vcpu max='240'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <iothreads supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <os supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <enum name='firmware'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <loader supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>rom</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pflash</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='readonly'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>yes</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>no</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='secure'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>no</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </loader>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </os>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <cpu>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='host-passthrough' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='hostPassthroughMigratable'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>on</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>off</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='maximum' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='maximumMigratable'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>on</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>off</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='host-model' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <vendor>AMD</vendor>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='x2apic'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='hypervisor'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='stibp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='overflow-recov'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='succor'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='lbrv'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc-scale'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='flushbyasid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='pause-filter'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='pfthreshold'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='disable' name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='custom' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Dhyana-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Genoa'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='auto-ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='auto-ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-128'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-256'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-512'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v6'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v7'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='KnightsMill'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512er'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512pf'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='KnightsMill-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512er'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512pf'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G4-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tbm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G5-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tbm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SierraForest'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cmpccxadd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SierraForest-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cmpccxadd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='athlon'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='athlon-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='core2duo'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='core2duo-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='coreduo'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='coreduo-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='n270'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='n270-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='phenom'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='phenom-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </cpu>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <memoryBacking supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <enum name='sourceType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>file</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>anonymous</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>memfd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </memoryBacking>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <devices>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <disk supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='diskDevice'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>disk</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>cdrom</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>floppy</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>lun</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='bus'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>ide</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>fdc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>scsi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>sata</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-non-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <graphics supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vnc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>egl-headless</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dbus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </graphics>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <video supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='modelType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vga</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>cirrus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>none</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>bochs</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>ramfb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </video>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <hostdev supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='mode'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>subsystem</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='startupPolicy'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>default</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>mandatory</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>requisite</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>optional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='subsysType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pci</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>scsi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='capsType'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='pciBackend'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </hostdev>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <rng supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-non-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>random</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>egd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>builtin</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </rng>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <filesystem supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='driverType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>path</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>handle</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtiofs</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </filesystem>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <tpm supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tpm-tis</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tpm-crb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>emulator</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>external</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendVersion'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>2.0</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </tpm>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <redirdev supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='bus'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </redirdev>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <channel supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pty</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>unix</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </channel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <crypto supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>qemu</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>builtin</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </crypto>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <interface supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>default</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>passt</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </interface>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <panic supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>isa</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>hyperv</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </panic>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <console supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>null</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pty</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dev</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>file</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pipe</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>stdio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>udp</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tcp</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>unix</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>qemu-vdagent</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dbus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </console>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </devices>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <features>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <gic supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <vmcoreinfo supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <genid supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <backingStoreInput supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <backup supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <async-teardown supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <ps2 supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <sev supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <sgx supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <hyperv supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='features'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>relaxed</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vapic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>spinlocks</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vpindex</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>runtime</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>synic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>stimer</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>reset</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vendor_id</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>frequencies</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>reenlightenment</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tlbflush</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>ipi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>avic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>emsr_bitmap</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>xmm_input</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <defaults>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <spinlocks>4095</spinlocks>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <stimer_direct>on</stimer_direct>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </defaults>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </hyperv>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <launchSecurity supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='sectype'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tdx</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </launchSecurity>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </features>
Jan 05 14:35:26 compute-0 nova_compute[185474]: </domainCapabilities>
Jan 05 14:35:26 compute-0 nova_compute[185474]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.284 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.288 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 05 14:35:26 compute-0 nova_compute[185474]: <domainCapabilities>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <domain>kvm</domain>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <arch>x86_64</arch>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <vcpu max='240'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <iothreads supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <os supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <enum name='firmware'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <loader supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>rom</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pflash</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='readonly'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>yes</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>no</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='secure'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>no</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </loader>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </os>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <cpu>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='host-passthrough' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='hostPassthroughMigratable'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>on</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>off</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='maximum' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='maximumMigratable'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>on</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>off</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='host-model' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <vendor>AMD</vendor>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='x2apic'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='hypervisor'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='stibp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='overflow-recov'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='succor'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='lbrv'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc-scale'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='flushbyasid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='pause-filter'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='pfthreshold'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='disable' name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='custom' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Dhyana-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Genoa'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='auto-ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='auto-ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-128'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-256'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-512'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v6'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v7'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='KnightsMill'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512er'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512pf'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='KnightsMill-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512er'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512pf'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G4-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tbm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G5-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tbm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SierraForest'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cmpccxadd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SierraForest-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cmpccxadd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='athlon'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='athlon-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='core2duo'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='core2duo-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='coreduo'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='coreduo-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='n270'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='n270-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='phenom'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='phenom-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </cpu>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <memoryBacking supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <enum name='sourceType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>file</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>anonymous</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>memfd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </memoryBacking>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <devices>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <disk supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='diskDevice'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>disk</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>cdrom</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>floppy</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>lun</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='bus'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>ide</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>fdc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>scsi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>sata</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-non-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <graphics supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vnc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>egl-headless</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dbus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </graphics>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <video supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='modelType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vga</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>cirrus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>none</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>bochs</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>ramfb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </video>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <hostdev supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='mode'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>subsystem</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='startupPolicy'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>default</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>mandatory</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>requisite</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>optional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='subsysType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pci</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>scsi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='capsType'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='pciBackend'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </hostdev>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <rng supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-non-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>random</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>egd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>builtin</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </rng>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <filesystem supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='driverType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>path</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>handle</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtiofs</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </filesystem>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <tpm supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tpm-tis</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tpm-crb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>emulator</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>external</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendVersion'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>2.0</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </tpm>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <redirdev supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='bus'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </redirdev>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <channel supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pty</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>unix</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </channel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <crypto supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>qemu</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>builtin</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </crypto>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <interface supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>default</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>passt</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </interface>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <panic supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>isa</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>hyperv</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </panic>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <console supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>null</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pty</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dev</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>file</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pipe</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>stdio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>udp</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tcp</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>unix</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>qemu-vdagent</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dbus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </console>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </devices>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <features>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <gic supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <vmcoreinfo supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <genid supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <backingStoreInput supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <backup supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <async-teardown supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <ps2 supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <sev supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <sgx supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <hyperv supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='features'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>relaxed</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vapic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>spinlocks</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vpindex</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>runtime</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>synic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>stimer</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>reset</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vendor_id</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>frequencies</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>reenlightenment</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tlbflush</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>ipi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>avic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>emsr_bitmap</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>xmm_input</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <defaults>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <spinlocks>4095</spinlocks>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <stimer_direct>on</stimer_direct>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </defaults>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </hyperv>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <launchSecurity supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='sectype'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tdx</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </launchSecurity>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </features>
Jan 05 14:35:26 compute-0 nova_compute[185474]: </domainCapabilities>
Jan 05 14:35:26 compute-0 nova_compute[185474]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.365 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 05 14:35:26 compute-0 nova_compute[185474]: <domainCapabilities>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <domain>kvm</domain>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <arch>x86_64</arch>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <vcpu max='4096'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <iothreads supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <os supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <enum name='firmware'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>efi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <loader supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>rom</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pflash</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='readonly'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>yes</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>no</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='secure'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>yes</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>no</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </loader>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </os>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <cpu>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='host-passthrough' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='hostPassthroughMigratable'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>on</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>off</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='maximum' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='maximumMigratable'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>on</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>off</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='host-model' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <vendor>AMD</vendor>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='x2apic'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='hypervisor'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='stibp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='overflow-recov'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='succor'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='lbrv'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='tsc-scale'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='flushbyasid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='pause-filter'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='pfthreshold'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <feature policy='disable' name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <mode name='custom' supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Broadwell-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Cooperlake-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Denverton-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Dhyana-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Genoa'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='auto-ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='auto-ibrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Milan-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amd-psfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='no-nested-data-bp'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='null-sel-clr-base'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='stibp-always-on'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-Rome-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='EPYC-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='GraniteRapids-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-128'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-256'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx10-512'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='prefetchiti'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Haswell-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v6'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Icelake-Server-v7'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='IvyBridge-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='KnightsMill'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512er'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512pf'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='KnightsMill-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4fmaps'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-4vnniw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512er'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512pf'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G4-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tbm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Opteron_G5-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fma4'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tbm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xop'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SapphireRapids-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='amx-tile'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-bf16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-fp16'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512-vpopcntdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bitalg'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vbmi2'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrc'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fzrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='la57'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='taa-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='tsx-ldtrk'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xfd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SierraForest'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cmpccxadd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='SierraForest-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ifma'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-ne-convert'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx-vnni-int8'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='bus-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cmpccxadd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fbsdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='fsrs'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ibrs-all'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mcdt-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pbrsb-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='psdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='sbdr-ssdp-no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='serialize'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vaes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='vpclmulqdq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Client-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='hle'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='rtm'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Skylake-Server-v5'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512bw'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512cd'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512dq'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512f'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='avx512vl'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='invpcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pcid'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='pku'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='mpx'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v2'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v3'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='core-capability'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='split-lock-detect'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='Snowridge-v4'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='cldemote'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='erms'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='gfni'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdir64b'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='movdiri'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='xsaves'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='athlon'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='athlon-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='core2duo'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='core2duo-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='coreduo'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='coreduo-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='n270'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='n270-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='ss'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='phenom'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <blockers model='phenom-v1'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnow'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <feature name='3dnowext'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </blockers>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </mode>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </cpu>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <memoryBacking supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <enum name='sourceType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>file</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>anonymous</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <value>memfd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </memoryBacking>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <devices>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <disk supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='diskDevice'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>disk</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>cdrom</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>floppy</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>lun</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='bus'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>fdc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>scsi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>sata</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-non-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <graphics supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vnc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>egl-headless</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dbus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </graphics>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <video supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='modelType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vga</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>cirrus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>none</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>bochs</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>ramfb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </video>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <hostdev supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='mode'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>subsystem</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='startupPolicy'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>default</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>mandatory</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>requisite</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>optional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='subsysType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pci</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>scsi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='capsType'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='pciBackend'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </hostdev>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <rng supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtio-non-transitional</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>random</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>egd</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>builtin</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </rng>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <filesystem supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='driverType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>path</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>handle</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>virtiofs</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </filesystem>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <tpm supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tpm-tis</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tpm-crb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>emulator</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>external</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendVersion'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>2.0</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </tpm>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <redirdev supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='bus'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>usb</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </redirdev>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <channel supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pty</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>unix</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </channel>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <crypto supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>qemu</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendModel'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>builtin</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </crypto>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <interface supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='backendType'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>default</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>passt</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </interface>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <panic supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='model'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>isa</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>hyperv</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </panic>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <console supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='type'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>null</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vc</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pty</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dev</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>file</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>pipe</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>stdio</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>udp</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tcp</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>unix</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>qemu-vdagent</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>dbus</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </console>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </devices>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   <features>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <gic supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <vmcoreinfo supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <genid supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <backingStoreInput supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <backup supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <async-teardown supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <ps2 supported='yes'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <sev supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <sgx supported='no'/>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <hyperv supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='features'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>relaxed</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vapic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>spinlocks</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vpindex</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>runtime</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>synic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>stimer</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>reset</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>vendor_id</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>frequencies</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>reenlightenment</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tlbflush</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>ipi</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>avic</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>emsr_bitmap</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>xmm_input</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <defaults>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <spinlocks>4095</spinlocks>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <stimer_direct>on</stimer_direct>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </defaults>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </hyperv>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     <launchSecurity supported='yes'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       <enum name='sectype'>
Jan 05 14:35:26 compute-0 nova_compute[185474]:         <value>tdx</value>
Jan 05 14:35:26 compute-0 nova_compute[185474]:       </enum>
Jan 05 14:35:26 compute-0 nova_compute[185474]:     </launchSecurity>
Jan 05 14:35:26 compute-0 nova_compute[185474]:   </features>
Jan 05 14:35:26 compute-0 nova_compute[185474]: </domainCapabilities>
Jan 05 14:35:26 compute-0 nova_compute[185474]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.428 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.429 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.429 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.429 185478 INFO nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Secure Boot support detected
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.431 185478 INFO nova.virt.libvirt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.432 185478 INFO nova.virt.libvirt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.443 185478 DEBUG nova.virt.libvirt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.474 185478 INFO nova.virt.node [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Determined node identity 81b80649-e249-4f86-9377-abfcf7fc43dd from /var/lib/nova/compute_id
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.500 185478 WARNING nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Compute nodes ['81b80649-e249-4f86-9377-abfcf7fc43dd'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.523 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.538 185478 WARNING nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.538 185478 DEBUG oslo_concurrency.lockutils [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.539 185478 DEBUG oslo_concurrency.lockutils [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.539 185478 DEBUG oslo_concurrency.lockutils [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.539 185478 DEBUG nova.compute.resource_tracker [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.700 185478 WARNING nova.virt.libvirt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.702 185478 DEBUG nova.compute.resource_tracker [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6006MB free_disk=72.64751815795898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.702 185478 DEBUG oslo_concurrency.lockutils [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.702 185478 DEBUG oslo_concurrency.lockutils [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.716 185478 WARNING nova.compute.resource_tracker [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] No compute node record for compute-0.ctlplane.example.com:81b80649-e249-4f86-9377-abfcf7fc43dd: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 81b80649-e249-4f86-9377-abfcf7fc43dd could not be found.
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.732 185478 INFO nova.compute.resource_tracker [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 81b80649-e249-4f86-9377-abfcf7fc43dd
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.813 185478 DEBUG nova.compute.resource_tracker [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:35:26 compute-0 nova_compute[185474]: 2026-01-05 14:35:26.813 185478 DEBUG nova.compute.resource_tracker [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.182 185478 INFO nova.scheduler.client.report [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [req-9a53f072-a5d1-482b-8721-28fb26db85a4] Created resource provider record via placement API for resource provider with UUID 81b80649-e249-4f86-9377-abfcf7fc43dd and name compute-0.ctlplane.example.com.
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.627 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 05 14:35:28 compute-0 nova_compute[185474]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.628 185478 INFO nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] kernel doesn't support AMD SEV
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.629 185478 DEBUG nova.compute.provider_tree [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.629 185478 DEBUG nova.virt.libvirt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.679 185478 DEBUG nova.scheduler.client.report [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Updated inventory for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.679 185478 DEBUG nova.compute.provider_tree [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Updating resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.680 185478 DEBUG nova.compute.provider_tree [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.778 185478 DEBUG nova.compute.provider_tree [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Updating resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.801 185478 DEBUG nova.compute.resource_tracker [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.801 185478 DEBUG oslo_concurrency.lockutils [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.802 185478 DEBUG nova.service [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.916 185478 DEBUG nova.service [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 05 14:35:28 compute-0 nova_compute[185474]: 2026-01-05 14:35:28.916 185478 DEBUG nova.servicegroup.drivers.db [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 05 14:35:32 compute-0 sshd-session[185778]: Accepted publickey for zuul from 192.168.122.30 port 53954 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:35:32 compute-0 systemd-logind[795]: New session 26 of user zuul.
Jan 05 14:35:32 compute-0 systemd[1]: Started Session 26 of User zuul.
Jan 05 14:35:32 compute-0 sshd-session[185778]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:35:33 compute-0 python3.9[185931]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:35:34 compute-0 sudo[186085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlrcbskxpkltmrqetjghldiksvdhywbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623734.2531145-36-237865238449687/AnsiballZ_systemd_service.py'
Jan 05 14:35:34 compute-0 sudo[186085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:35 compute-0 python3.9[186087]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:35:35 compute-0 systemd[1]: Reloading.
Jan 05 14:35:35 compute-0 systemd-rc-local-generator[186115]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:35:35 compute-0 systemd-sysv-generator[186118]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:35:35 compute-0 sudo[186085]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:36 compute-0 python3.9[186273]: ansible-ansible.builtin.service_facts Invoked
Jan 05 14:35:36 compute-0 network[186290]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 14:35:36 compute-0 network[186291]: 'network-scripts' will be removed from distribution in near future.
Jan 05 14:35:36 compute-0 network[186292]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 14:35:41 compute-0 sudo[186562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewtvxstwrgqmejklympbtizknrpygsbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623741.157268-55-82063542835849/AnsiballZ_systemd_service.py'
Jan 05 14:35:41 compute-0 sudo[186562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:41 compute-0 python3.9[186564]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:35:41 compute-0 sudo[186562]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:42 compute-0 sudo[186715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmeszlghctonjucfkufnloojcruutrrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623742.2579043-65-269260505654758/AnsiballZ_file.py'
Jan 05 14:35:42 compute-0 sudo[186715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:42 compute-0 python3.9[186717]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:35:42 compute-0 sudo[186715]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:42 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 14:35:42 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 14:35:43 compute-0 sudo[186868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwzmkcqezqnyxjgoyeaaahmyfccttwky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623743.2076523-73-245064117053264/AnsiballZ_file.py'
Jan 05 14:35:43 compute-0 sudo[186868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:43 compute-0 python3.9[186870]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:35:43 compute-0 sudo[186868]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:44 compute-0 sudo[187037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyiylhccwcrmdyoprgphedlfcfcmqgex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623744.0741014-82-144868001930488/AnsiballZ_command.py'
Jan 05 14:35:44 compute-0 sudo[187037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:44 compute-0 podman[186994]: 2026-01-05 14:35:44.618881025 +0000 UTC m=+0.120191591 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 14:35:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:35:44.785 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:35:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:35:44.786 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:35:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:35:44.786 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:35:44 compute-0 python3.9[187044]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:35:44 compute-0 sudo[187037]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:45 compute-0 python3.9[187200]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 14:35:46 compute-0 sudo[187350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agsvthuprvojklthtkqfzebzuxeuvsyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623746.1341994-100-257424863506830/AnsiballZ_systemd_service.py'
Jan 05 14:35:46 compute-0 sudo[187350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:46 compute-0 python3.9[187352]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:35:46 compute-0 systemd[1]: Reloading.
Jan 05 14:35:46 compute-0 systemd-rc-local-generator[187380]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:35:46 compute-0 systemd-sysv-generator[187383]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:35:47 compute-0 sudo[187350]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:47 compute-0 sudo[187538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxigvqumwdovyyupqhfodukovsrxgljh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623747.358191-108-250509980154561/AnsiballZ_command.py'
Jan 05 14:35:47 compute-0 sudo[187538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:47 compute-0 python3.9[187540]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:35:48 compute-0 sudo[187538]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:48 compute-0 sudo[187691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqkgsywznpspyqfrcuishcxkldkptylb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623748.3045535-117-181132181164100/AnsiballZ_file.py'
Jan 05 14:35:48 compute-0 sudo[187691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:48 compute-0 python3.9[187693]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:35:48 compute-0 sudo[187691]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:49 compute-0 podman[187817]: 2026-01-05 14:35:49.615005175 +0000 UTC m=+0.083353406 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:35:49 compute-0 python3.9[187853]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:35:50 compute-0 sudo[188014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsdnrxorvqbibgzgdegedjuoacuroypk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623750.0405123-133-196211089892442/AnsiballZ_group.py'
Jan 05 14:35:50 compute-0 sudo[188014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:50 compute-0 python3.9[188016]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Jan 05 14:35:50 compute-0 sudo[188014]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:51 compute-0 sudo[188166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dauqhqrtvmhcyfwhaptbexonpjxlwqul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623751.2719593-144-211509819826493/AnsiballZ_getent.py'
Jan 05 14:35:51 compute-0 sudo[188166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:52 compute-0 python3.9[188168]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 05 14:35:52 compute-0 sudo[188166]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:52 compute-0 sudo[188319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvmsasipliiffryunoeqbznfifofxnnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623752.240582-152-135071194659362/AnsiballZ_group.py'
Jan 05 14:35:52 compute-0 sudo[188319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:52 compute-0 python3.9[188321]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 14:35:52 compute-0 groupadd[188322]: group added to /etc/group: name=ceilometer, GID=42405
Jan 05 14:35:52 compute-0 groupadd[188322]: group added to /etc/gshadow: name=ceilometer
Jan 05 14:35:52 compute-0 groupadd[188322]: new group: name=ceilometer, GID=42405
Jan 05 14:35:52 compute-0 sudo[188319]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:53 compute-0 sudo[188477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwskrluitqzjftojqldekzqmxzrdljii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623753.1161098-160-179077163836907/AnsiballZ_user.py'
Jan 05 14:35:53 compute-0 sudo[188477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:35:53 compute-0 python3.9[188479]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 05 14:35:53 compute-0 useradd[188481]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Jan 05 14:35:53 compute-0 useradd[188481]: add 'ceilometer' to group 'libvirt'
Jan 05 14:35:53 compute-0 useradd[188481]: add 'ceilometer' to shadow group 'libvirt'
Jan 05 14:35:54 compute-0 sudo[188477]: pam_unix(sudo:session): session closed for user root
Jan 05 14:35:55 compute-0 sshd-session[188536]: Invalid user solv from 165.22.168.95 port 36204
Jan 05 14:35:55 compute-0 sshd-session[188536]: Connection closed by invalid user solv 165.22.168.95 port 36204 [preauth]
Jan 05 14:35:55 compute-0 python3.9[188639]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:35:56 compute-0 python3.9[188760]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767623754.8928642-186-21481775778437/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:35:57 compute-0 python3.9[188910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:35:57 compute-0 python3.9[189031]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767623756.4730105-186-99684571249249/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:35:58 compute-0 python3.9[189181]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:35:59 compute-0 python3.9[189302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767623757.9254842-186-143411723789784/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:35:59 compute-0 python3.9[189452]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:36:00 compute-0 python3.9[189604]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:36:01 compute-0 python3.9[189756]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:02 compute-0 python3.9[189877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623760.915606-245-267282074653956/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:36:02 compute-0 python3.9[190027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:03 compute-0 python3.9[190148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623762.3467598-245-114405819129471/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:36:04 compute-0 python3.9[190298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:04 compute-0 python3.9[190419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623763.7410421-274-167459545504673/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:36:05 compute-0 python3.9[190569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:06 compute-0 python3.9[190690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623765.263247-290-205689678165566/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:06 compute-0 nova_compute[185474]: 2026-01-05 14:36:06.919 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:06 compute-0 nova_compute[185474]: 2026-01-05 14:36:06.949 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:07 compute-0 python3.9[190840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:07 compute-0 python3.9[190961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623766.7429955-305-187897935175318/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:08 compute-0 python3.9[191111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:09 compute-0 python3.9[191232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623768.2359917-320-35794861318299/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:10 compute-0 sudo[191382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyntzxszioljectrbslbferrznohpxfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623769.7452803-335-166748032093384/AnsiballZ_file.py'
Jan 05 14:36:10 compute-0 sudo[191382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:10 compute-0 python3.9[191384]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:10 compute-0 sudo[191382]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:10 compute-0 sudo[191534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvvqemgztuylhktgbdkfoaplxvnupcsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623770.547234-343-169219690333284/AnsiballZ_file.py'
Jan 05 14:36:10 compute-0 sudo[191534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:11 compute-0 python3.9[191536]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:11 compute-0 sudo[191534]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:11 compute-0 python3.9[191686]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:36:12 compute-0 python3.9[191838]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:36:13 compute-0 python3.9[191990]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:36:14 compute-0 sudo[192142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueqjmvdqirvbhzyxqjjoapwuewycctsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623773.947107-375-185258774845462/AnsiballZ_file.py'
Jan 05 14:36:14 compute-0 sudo[192142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:14 compute-0 python3.9[192144]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:36:14 compute-0 sudo[192142]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:15 compute-0 sudo[192313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sufyocyktbrlcwockufjihppkoyvqqxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623774.7434216-383-247138549711031/AnsiballZ_systemd_service.py'
Jan 05 14:36:15 compute-0 sudo[192313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:15 compute-0 podman[192268]: 2026-01-05 14:36:15.158731558 +0000 UTC m=+0.113923316 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 14:36:15 compute-0 python3.9[192320]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:36:15 compute-0 systemd[1]: Reloading.
Jan 05 14:36:15 compute-0 systemd-rc-local-generator[192350]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:36:15 compute-0 systemd-sysv-generator[192353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:36:15 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 05 14:36:15 compute-0 sudo[192313]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:16 compute-0 sudo[192510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aymvtxvhzuzrtqulxrcmjmesqbmrthrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623776.2454693-392-100388151751368/AnsiballZ_stat.py'
Jan 05 14:36:16 compute-0 sudo[192510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:16 compute-0 python3.9[192512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:16 compute-0 sudo[192510]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:17 compute-0 sudo[192633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxxyeytfxhkcolawetkbusetqyxwdfgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623776.2454693-392-100388151751368/AnsiballZ_copy.py'
Jan 05 14:36:17 compute-0 sudo[192633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:17 compute-0 python3.9[192635]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623776.2454693-392-100388151751368/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:36:17 compute-0 sudo[192633]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:17 compute-0 sudo[192709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdimlpnwnxrxothshpnwgrxtfhsczwpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623776.2454693-392-100388151751368/AnsiballZ_stat.py'
Jan 05 14:36:17 compute-0 sudo[192709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:18 compute-0 python3.9[192711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:18 compute-0 sudo[192709]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:18 compute-0 sudo[192832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkqqslpwavwquykgvzyfljslgqmmlopo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623776.2454693-392-100388151751368/AnsiballZ_copy.py'
Jan 05 14:36:18 compute-0 sudo[192832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:18 compute-0 python3.9[192834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623776.2454693-392-100388151751368/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:36:18 compute-0 sudo[192832]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:19 compute-0 sudo[192996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahwrvdemrowgetlxsjktvgpihbzjypql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623779.5127766-424-139924767732550/AnsiballZ_file.py'
Jan 05 14:36:19 compute-0 sudo[192996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:19 compute-0 podman[192958]: 2026-01-05 14:36:19.949269512 +0000 UTC m=+0.097307330 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 05 14:36:20 compute-0 python3.9[193002]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:20 compute-0 sudo[192996]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:20 compute-0 sudo[193154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykxmprgcireedbnaryekjzbsobdscbxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623780.3870163-432-191643612971956/AnsiballZ_file.py'
Jan 05 14:36:20 compute-0 sudo[193154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:20 compute-0 python3.9[193156]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:36:20 compute-0 sudo[193154]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:21 compute-0 sudo[193306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkykozpwpcsbouhwmvmayhjfswrzgduh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623781.2450082-440-28366831228383/AnsiballZ_stat.py'
Jan 05 14:36:21 compute-0 sudo[193306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:21 compute-0 python3.9[193308]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:21 compute-0 sudo[193306]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:22 compute-0 sudo[193429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jssrmityvupyghainxokojelxvgyocxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623781.2450082-440-28366831228383/AnsiballZ_copy.py'
Jan 05 14:36:22 compute-0 sudo[193429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:22 compute-0 python3.9[193431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623781.2450082-440-28366831228383/.source.json _original_basename=.rdwxam__ follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:22 compute-0 sudo[193429]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:23 compute-0 python3.9[193581]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.402 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.402 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.402 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.422 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.423 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.424 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.425 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.425 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.426 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.427 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.427 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.428 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.463 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.464 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.464 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.465 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.688 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.690 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5982MB free_disk=72.65156555175781GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.690 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.690 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.783 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.783 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:36:25 compute-0 sudo[194002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mooyytxendnaoceeybbldyriwyfihgmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623785.2585876-480-52798361137188/AnsiballZ_container_config_data.py'
Jan 05 14:36:25 compute-0 sudo[194002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.814 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.830 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.832 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:36:25 compute-0 nova_compute[185474]: 2026-01-05 14:36:25.832 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:36:26 compute-0 python3.9[194004]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Jan 05 14:36:26 compute-0 sudo[194002]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:26 compute-0 sudo[194154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtvoeztukcactaygxbixpvfnzbpjgiun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623786.423648-491-213025312421965/AnsiballZ_container_config_hash.py'
Jan 05 14:36:26 compute-0 sudo[194154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:27 compute-0 python3.9[194156]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 14:36:27 compute-0 sudo[194154]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:28 compute-0 sudo[194306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtljyxjburyuyiyvcixkjdaravdurvmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623787.5769274-500-112355119988146/AnsiballZ_podman_container_info.py'
Jan 05 14:36:28 compute-0 sudo[194306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:28 compute-0 python3.9[194308]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 14:36:28 compute-0 sudo[194306]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:29 compute-0 sudo[194484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxdnkuttyrxcdfkwbtpsshwqdmepqnyd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623789.2283-513-72496597088619/AnsiballZ_edpm_container_manage.py'
Jan 05 14:36:29 compute-0 sudo[194484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:30 compute-0 python3[194486]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 14:36:30 compute-0 podman[194521]: 2026-01-05 14:36:30.232378937 +0000 UTC m=+0.067200003 container create 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_managed=true, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 14:36:30 compute-0 podman[194521]: 2026-01-05 14:36:30.195431298 +0000 UTC m=+0.030252394 image pull 6e61bfccaf21ee9962f8af7b3bc33737123ae362fb340f43cd517263f3ab794c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 05 14:36:30 compute-0 python3[194486]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Jan 05 14:36:30 compute-0 sudo[194484]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:31 compute-0 sudo[194709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxcpxqpxfsjpobilfwuksllhbneeihoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623790.6552489-521-872458076178/AnsiballZ_stat.py'
Jan 05 14:36:31 compute-0 sudo[194709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:31 compute-0 python3.9[194711]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:36:31 compute-0 sudo[194709]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:31 compute-0 sudo[194863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txsupaizxxykatsjpafampuxtrdkuvij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623791.6156938-530-257943106979789/AnsiballZ_file.py'
Jan 05 14:36:32 compute-0 sudo[194863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:32 compute-0 python3.9[194865]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:32 compute-0 sudo[194863]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:32 compute-0 sudo[194939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaxqkigzizoqiwezqmaonfstpesjzjru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623791.6156938-530-257943106979789/AnsiballZ_stat.py'
Jan 05 14:36:32 compute-0 sudo[194939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:32 compute-0 python3.9[194941]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:36:32 compute-0 sudo[194939]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:33 compute-0 sudo[195090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzugrhsqxitvrknybrmkkhowcqktxeaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623792.8266134-530-169890517796433/AnsiballZ_copy.py'
Jan 05 14:36:33 compute-0 sudo[195090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:33 compute-0 python3.9[195092]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767623792.8266134-530-169890517796433/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:33 compute-0 sudo[195090]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:34 compute-0 sudo[195166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iawxnlafnotunfkjrajfnhqernsrxats ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623792.8266134-530-169890517796433/AnsiballZ_systemd.py'
Jan 05 14:36:34 compute-0 sudo[195166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:34 compute-0 python3.9[195168]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:36:34 compute-0 systemd[1]: Reloading.
Jan 05 14:36:34 compute-0 systemd-rc-local-generator[195197]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:36:34 compute-0 systemd-sysv-generator[195203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:36:34 compute-0 sudo[195166]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:35 compute-0 sudo[195280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrwtglotganrvyyiwwbgvkoblnmcgbtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623792.8266134-530-169890517796433/AnsiballZ_systemd.py'
Jan 05 14:36:35 compute-0 sudo[195280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:35 compute-0 python3.9[195282]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:36:35 compute-0 systemd[1]: Reloading.
Jan 05 14:36:35 compute-0 systemd-rc-local-generator[195306]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:36:35 compute-0 systemd-sysv-generator[195314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:36:36 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Jan 05 14:36:36 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4002c434aaf2545c891cf284b45caef1bfa2ba9f017c1d19eb1ded5d59f20743/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 14:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4002c434aaf2545c891cf284b45caef1bfa2ba9f017c1d19eb1ded5d59f20743/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 14:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4002c434aaf2545c891cf284b45caef1bfa2ba9f017c1d19eb1ded5d59f20743/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 05 14:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4002c434aaf2545c891cf284b45caef1bfa2ba9f017c1d19eb1ded5d59f20743/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 05 14:36:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1.
Jan 05 14:36:36 compute-0 podman[195321]: 2026-01-05 14:36:36.296742283 +0000 UTC m=+0.184234180 container init 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: + sudo -E kolla_set_configs
Jan 05 14:36:36 compute-0 podman[195321]: 2026-01-05 14:36:36.330278322 +0000 UTC m=+0.217770159 container start 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 05 14:36:36 compute-0 podman[195321]: ceilometer_agent_compute
Jan 05 14:36:36 compute-0 sudo[195343]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: sudo: unable to send audit message: Operation not permitted
Jan 05 14:36:36 compute-0 sudo[195343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 14:36:36 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Jan 05 14:36:36 compute-0 sudo[195280]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Validating config file
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Copying service configuration files
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: INFO:__main__:Writing out command to execute
Jan 05 14:36:36 compute-0 sudo[195343]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: ++ cat /run_command
Jan 05 14:36:36 compute-0 podman[195344]: 2026-01-05 14:36:36.427254342 +0000 UTC m=+0.076753132 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251224)
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: + ARGS=
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: + sudo kolla_copy_cacerts
Jan 05 14:36:36 compute-0 systemd[1]: 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1-4c81b73250372c03.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 14:36:36 compute-0 systemd[1]: 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1-4c81b73250372c03.service: Failed with result 'exit-code'.
Jan 05 14:36:36 compute-0 sudo[195366]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: sudo: unable to send audit message: Operation not permitted
Jan 05 14:36:36 compute-0 sudo[195366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 14:36:36 compute-0 sudo[195366]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: + [[ ! -n '' ]]
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: + . kolla_extend_start
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: + umask 0022
Jan 05 14:36:36 compute-0 ceilometer_agent_compute[195337]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.279 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.279 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.279 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.279 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.280 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.280 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.280 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.280 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.280 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.280 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.280 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.280 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.280 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.280 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.281 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.282 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.283 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.284 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.284 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.284 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.284 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.284 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.284 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.284 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.284 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.284 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.284 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.285 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.286 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.286 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.286 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.286 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.286 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.286 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.286 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.286 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.286 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.287 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.288 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.289 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.290 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.291 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.292 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.292 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.292 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.292 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.292 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.292 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.292 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.292 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.292 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.292 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.315 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.316 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.316 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.317 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.317 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.317 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.317 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.318 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.318 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.318 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.318 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.318 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.319 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.319 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.319 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.319 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.319 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.320 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.320 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.320 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.320 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.320 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.320 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.321 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.321 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.321 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.321 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.321 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.321 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.322 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.322 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.322 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.322 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.322 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.322 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.322 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.323 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.323 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.323 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.323 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.323 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.323 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.324 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.324 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.324 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.324 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.324 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.324 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.324 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.324 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.325 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.325 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.325 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.325 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.325 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.325 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.325 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.326 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.326 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.326 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.326 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.326 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.326 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.326 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.326 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.327 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.327 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.327 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.327 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.327 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.327 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.327 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.328 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.328 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.328 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.328 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.328 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.328 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.328 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.329 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.329 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.329 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.329 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.329 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.329 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.330 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.330 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.330 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.330 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.330 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.330 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.331 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.331 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.331 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.331 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.331 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.331 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.331 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.332 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.332 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.332 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.332 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.332 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.332 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.332 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.332 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.332 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.333 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.333 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.333 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.333 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.333 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.333 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.333 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.334 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.334 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.334 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.334 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.334 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.334 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.335 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.335 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.335 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.335 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.335 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.335 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.335 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.335 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.336 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.336 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.336 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.336 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.336 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.336 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.336 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.336 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.336 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.337 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.337 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.337 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.337 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.337 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.337 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.337 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.337 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.337 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.338 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.338 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.338 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.338 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.338 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.341 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.343 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.343 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Jan 05 14:36:37 compute-0 python3.9[195518]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.568 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.576 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.577 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.577 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.699 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.700 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.700 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.700 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.700 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.700 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.700 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.700 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.700 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.700 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.700 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.701 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.701 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.701 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.701 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.701 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.701 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.701 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.701 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.701 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.702 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.703 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.704 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.705 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.706 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.707 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.708 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.709 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.710 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.711 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.712 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.713 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.713 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.713 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
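The block above is oslo.config's standard startup option dump: when the cotyledon worker starts, oslo_config_glue calls ConfigOpts.log_opt_values() at DEBUG level, printing every registered option with secrets (passwords, messaging URLs, access keys) masked as ****. A minimal sketch of how a service produces such a dump, assuming only the oslo.config library and an illustrative subset of the option names shown above:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    # Illustrative subset of the options visible in the dump above.
    OPTS = [
        cfg.StrOpt('pipeline_cfg_file', default='pipeline.yaml'),
        cfg.ListOpt('polling_namespaces', default=['compute']),
        cfg.IntOpt('max_parallel_requests', default=64),
    ]

    conf = cfg.ConfigOpts()
    conf.register_opts(OPTS)
    conf([])  # or: conf(['--config-file', '/etc/ceilometer/ceilometer.conf'])

    # Emits one "option = value ... log_opt_values" line per registered option.
    conf.log_opt_values(LOG, logging.DEBUG)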
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.713 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
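cotyledon is the process runner here: a master process forks the AgentManager worker, and the child then blocks in wait_forever(). A rough sketch of that pattern with a simplified stand-in service (PollingService below is hypothetical, not ceilometer's actual class):

    import cotyledon

    class PollingService(cotyledon.Service):
        # hypothetical stand-in for ceilometer's AgentManager worker
        def run(self):
            # the real agent sets up polling tasks and loops until terminated
            pass

    def main():
        sm = cotyledon.ServiceManager()
        sm.add(PollingService, workers=1)
        sm.run()  # children log "Run service <name>(<id>) [<pid>] wait_forever"

    if __name__ == '__main__':
        main()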
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.717 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
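The dict logged above is the parsed polling definition (polling.cfg_file = polling.yaml). A YAML file along the following lines would parse to exactly that structure; the file contents here are an assumption reconstructed from the logged dict, loaded with PyYAML for illustration:

    import yaml

    # Assumed polling.yaml contents, reconstructed from the "Config file" log line above.
    POLLING_YAML = '''
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    '''

    cfg = yaml.safe_load(POLLING_YAML)
    assert cfg == {'sources': [{'name': 'pollsters', 'interval': 120,
                                'meters': ['power.state', 'cpu', 'memory.usage',
                                           'disk.*', 'network.*']}]}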
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.744 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.745 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
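The two messages above follow directly from polling.threads_to_process_pollsters = 1: the [pollsters] source expands to more pollsters than there are worker threads, so they are queued onto a single-thread executor and one polling cycle takes roughly the sum of the individual pollster runtimes. A generic sketch of that scheduling behaviour (poll() below is a placeholder, not a ceilometer API):

    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        # placeholder for a pollster's sample-collection work
        return meter

    meters = ['power.state', 'cpu', 'memory.usage', 'disk.device.read.bytes']

    # threads_to_process_pollsters = 1: every pollster waits behind one worker thread
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, m) for m in meters]
        samples = [f.result() for f in futures]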
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.745 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.746 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.746 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.747 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
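With compute.instance_discovery_method = libvirt_metadata, instance discovery queries the local libvirt daemon instead of the Nova API, which is why the agent opens qemu:///system here. A minimal sketch using the libvirt-python bindings (read-only access is an assumption made for the example):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')  # same URI as in the log line above
    try:
        for dom in conn.listAllDomains():
            # Nova embeds instance metadata in each domain's XML description
            print(dom.name(), dom.UUIDString())
    finally:
        conn.close()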
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.747 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.748 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.748 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.748 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.748 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.752 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.754 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52a60f0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.760 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.760 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.760 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.760 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.761 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.761 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.761 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.762 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.762 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.762 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.763 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.763 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.763 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.763 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.763 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.763 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.764 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:36:37.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:36:38 compute-0 sudo[195681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxkonsavwnhupbsinixfrzbccmyfmepn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623797.8824968-571-17607277412865/AnsiballZ_stat.py'
Jan 05 14:36:38 compute-0 sudo[195681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:38 compute-0 python3.9[195683]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:38 compute-0 sudo[195681]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:39 compute-0 sudo[195806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-couqmzbyxpndyppzqypeqdaobhebnhih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623797.8824968-571-17607277412865/AnsiballZ_copy.py'
Jan 05 14:36:39 compute-0 sudo[195806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:39 compute-0 python3.9[195808]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623797.8824968-571-17607277412865/.source.yaml _original_basename=.ktdwlfde follow=False checksum=10c6a1209099307baae54d38bb2e33dc2d3e787b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:39 compute-0 sudo[195806]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:39 compute-0 sudo[195958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cievhajqgdaowsositvlzjlizylixfgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623799.4837887-586-106276816542747/AnsiballZ_stat.py'
Jan 05 14:36:39 compute-0 sudo[195958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:40 compute-0 python3.9[195960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:40 compute-0 sudo[195958]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:40 compute-0 sudo[196081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-becdntkioomeynaeeecwyvlzmcdbgzee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623799.4837887-586-106276816542747/AnsiballZ_copy.py'
Jan 05 14:36:40 compute-0 sudo[196081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:40 compute-0 python3.9[196083]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623799.4837887-586-106276816542747/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:36:40 compute-0 sudo[196081]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:41 compute-0 sudo[196233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcietxajmecurngwbhkqoyifkssshkrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623801.4583714-607-123494074216154/AnsiballZ_file.py'
Jan 05 14:36:41 compute-0 sudo[196233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:42 compute-0 python3.9[196235]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:42 compute-0 sudo[196233]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:42 compute-0 sudo[196385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbfisiycocrfeaizhreyjzsnugxbrthh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623802.365047-615-153808605163950/AnsiballZ_file.py'
Jan 05 14:36:42 compute-0 sudo[196385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:42 compute-0 python3.9[196387]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:36:42 compute-0 sudo[196385]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:43 compute-0 sudo[196537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmsfltopybmfpswmuddwvwafgikkuhhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623803.2010384-623-71134242563086/AnsiballZ_stat.py'
Jan 05 14:36:43 compute-0 sudo[196537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:43 compute-0 python3.9[196539]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:43 compute-0 sudo[196537]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:44 compute-0 sudo[196615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idbzklwhsmdbpiealgzlwpvwbrdjgsmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623803.2010384-623-71134242563086/AnsiballZ_file.py'
Jan 05 14:36:44 compute-0 sudo[196615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:44 compute-0 python3.9[196617]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.hwudqrak recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:44 compute-0 sudo[196615]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:36:44.787 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:36:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:36:44.788 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:36:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:36:44.788 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:36:45 compute-0 python3.9[196767]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:45 compute-0 podman[196837]: 2026-01-05 14:36:45.651745433 +0000 UTC m=+0.135109200 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 05 14:36:47 compute-0 sudo[197215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nznkhzvnuopptphqhebegivudmhhnpgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623807.1283662-660-157547586924220/AnsiballZ_container_config_data.py'
Jan 05 14:36:47 compute-0 sudo[197215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:47 compute-0 python3.9[197217]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Jan 05 14:36:47 compute-0 sudo[197215]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:48 compute-0 sudo[197367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rehovodvqecoipysrlbkisjhmqkytlwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623808.205831-671-12641424920818/AnsiballZ_container_config_hash.py'
Jan 05 14:36:48 compute-0 sudo[197367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:48 compute-0 python3.9[197369]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 14:36:48 compute-0 sudo[197367]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:49 compute-0 sudo[197519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyvcxzjnagkhxmvtemxjcjetlttfwjey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623809.1997359-680-5943739010220/AnsiballZ_podman_container_info.py'
Jan 05 14:36:49 compute-0 sudo[197519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:49 compute-0 python3.9[197521]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 14:36:50 compute-0 sudo[197519]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:50 compute-0 podman[197572]: 2026-01-05 14:36:50.585738039 +0000 UTC m=+0.076795757 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 14:36:51 compute-0 sudo[197716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azhfpzjleoopgkduzpwxapoqnauiclbd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623810.771915-693-110589053967844/AnsiballZ_edpm_container_manage.py'
Jan 05 14:36:51 compute-0 sudo[197716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:51 compute-0 python3[197718]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 14:36:51 compute-0 podman[197751]: 2026-01-05 14:36:51.685822348 +0000 UTC m=+0.078210355 container create fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:36:51 compute-0 podman[197751]: 2026-01-05 14:36:51.650780027 +0000 UTC m=+0.043168094 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 05 14:36:51 compute-0 python3[197718]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Jan 05 14:36:51 compute-0 sudo[197716]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:52 compute-0 sudo[197939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdyvxqpmkdnfwhzpyvbavbcamlkttsiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623812.153438-701-35951879189480/AnsiballZ_stat.py'
Jan 05 14:36:52 compute-0 sudo[197939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:52 compute-0 python3.9[197941]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:36:52 compute-0 sudo[197939]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:53 compute-0 sudo[198093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrtkayripjmkppjqwboognnlornxhkuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623813.129869-710-194488431188172/AnsiballZ_file.py'
Jan 05 14:36:53 compute-0 sudo[198093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:53 compute-0 python3.9[198095]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:53 compute-0 sudo[198093]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:53 compute-0 sudo[198169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuxzhxoksesrzjtgyhazeoycukulguwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623813.129869-710-194488431188172/AnsiballZ_stat.py'
Jan 05 14:36:53 compute-0 sudo[198169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:54 compute-0 python3.9[198171]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:36:54 compute-0 sudo[198169]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:54 compute-0 sudo[198320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiuylldjbedgibmkdtdwcykdpybbwtfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623814.2966468-710-17522409974206/AnsiballZ_copy.py'
Jan 05 14:36:54 compute-0 sudo[198320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:55 compute-0 python3.9[198322]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767623814.2966468-710-17522409974206/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:36:55 compute-0 sudo[198320]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:55 compute-0 sudo[198396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thxswuinlhysizocmkfgxyspqteeafjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623814.2966468-710-17522409974206/AnsiballZ_systemd.py'
Jan 05 14:36:55 compute-0 sudo[198396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:55 compute-0 python3.9[198398]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:36:55 compute-0 systemd[1]: Reloading.
Jan 05 14:36:55 compute-0 systemd-rc-local-generator[198424]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:36:55 compute-0 systemd-sysv-generator[198428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:36:56 compute-0 sudo[198396]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:56 compute-0 sudo[198507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpazkcvgerkmyfhvzwxlxhotlipucotx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623814.2966468-710-17522409974206/AnsiballZ_systemd.py'
Jan 05 14:36:56 compute-0 sudo[198507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:56 compute-0 python3.9[198509]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:36:56 compute-0 systemd[1]: Reloading.
Jan 05 14:36:57 compute-0 systemd-rc-local-generator[198534]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:36:57 compute-0 systemd-sysv-generator[198542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:36:57 compute-0 systemd[1]: Starting node_exporter container...
Jan 05 14:36:57 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d496c24afc4d3668795ebfec8530cef607a13a73f820b7e522c8da61beb147/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 14:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d496c24afc4d3668795ebfec8530cef607a13a73f820b7e522c8da61beb147/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 14:36:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5.
Jan 05 14:36:57 compute-0 podman[198548]: 2026-01-05 14:36:57.379958791 +0000 UTC m=+0.155175977 container init fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.400Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.400Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.400Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.401Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.401Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.402Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.402Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.402Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.402Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.402Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=arp
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=bcache
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=bonding
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=cpu
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=edac
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=filefd
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=netclass
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=netdev
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=netstat
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=nfs
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=nvme
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=softnet
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=systemd
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=xfs
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.403Z caller=node_exporter.go:117 level=info collector=zfs
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.404Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Jan 05 14:36:57 compute-0 node_exporter[198563]: ts=2026-01-05T14:36:57.405Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Jan 05 14:36:57 compute-0 podman[198548]: 2026-01-05 14:36:57.410992754 +0000 UTC m=+0.186209900 container start fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:36:57 compute-0 podman[198548]: node_exporter
Jan 05 14:36:57 compute-0 systemd[1]: Started node_exporter container.
Jan 05 14:36:57 compute-0 sudo[198507]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:57 compute-0 podman[198572]: 2026-01-05 14:36:57.508277166 +0000 UTC m=+0.080009844 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:36:58 compute-0 python3.9[198743]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 14:36:59 compute-0 sudo[198893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxjgneiwjclplslkruqwxneevobuklsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623818.8467486-751-127003932949222/AnsiballZ_stat.py'
Jan 05 14:36:59 compute-0 sudo[198893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:36:59 compute-0 python3.9[198895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:36:59 compute-0 sudo[198893]: pam_unix(sudo:session): session closed for user root
Jan 05 14:36:59 compute-0 sudo[199018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyzuxxfolkdjvtssgsqqqwlkaoeuksfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623818.8467486-751-127003932949222/AnsiballZ_copy.py'
Jan 05 14:36:59 compute-0 sudo[199018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:00 compute-0 python3.9[199020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623818.8467486-751-127003932949222/.source.yaml _original_basename=.4wnncg9v follow=False checksum=c8d3c641b2aa7e793d6a1d51793ba311335c8123 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:00 compute-0 sudo[199018]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:00 compute-0 sudo[199170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqnbavorjzdapxivumyvtiietatwygjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623820.344136-766-107557389295945/AnsiballZ_stat.py'
Jan 05 14:37:00 compute-0 sudo[199170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:00 compute-0 python3.9[199172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:37:00 compute-0 sudo[199170]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:01 compute-0 sudo[199293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsrwnrwyvkpejhwddynphprpuxgkocml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623820.344136-766-107557389295945/AnsiballZ_copy.py'
Jan 05 14:37:01 compute-0 sudo[199293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:01 compute-0 python3.9[199295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623820.344136-766-107557389295945/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:37:01 compute-0 sudo[199293]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:02 compute-0 sudo[199445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmhfbqoofcfvdxodjyphqrufnolxghlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623822.4594767-787-222741226114636/AnsiballZ_file.py'
Jan 05 14:37:02 compute-0 sudo[199445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:02 compute-0 python3.9[199447]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:03 compute-0 sudo[199445]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:03 compute-0 sudo[199597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nglcjzdqjoiebcokzwjkuvxluusgriux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623823.2074625-795-227531445218618/AnsiballZ_file.py'
Jan 05 14:37:03 compute-0 sudo[199597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:03 compute-0 python3.9[199599]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:37:03 compute-0 sudo[199597]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:04 compute-0 sudo[199749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdkixbudgakuotjypzpwrjitgxgfnkxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623823.9626634-803-177966569352888/AnsiballZ_stat.py'
Jan 05 14:37:04 compute-0 sudo[199749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:04 compute-0 python3.9[199751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:37:04 compute-0 sudo[199749]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:04 compute-0 sudo[199827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxcbolxncgpqmkjpdttymphrgpnboqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623823.9626634-803-177966569352888/AnsiballZ_file.py'
Jan 05 14:37:04 compute-0 sudo[199827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:05 compute-0 python3.9[199829]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.p8avpvg4 recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:05 compute-0 sudo[199827]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:05 compute-0 python3.9[199979]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:06 compute-0 podman[200103]: 2026-01-05 14:37:06.636420251 +0000 UTC m=+0.114816100 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.build-date=20251224, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 14:37:06 compute-0 systemd[1]: 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1-4c81b73250372c03.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 14:37:06 compute-0 systemd[1]: 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1-4c81b73250372c03.service: Failed with result 'exit-code'.
Jan 05 14:37:08 compute-0 sudo[200419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbxtbcunqtiaqmxfavnycmrmdmxpgnqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623827.866288-840-154205673052252/AnsiballZ_container_config_data.py'
Jan 05 14:37:08 compute-0 sudo[200419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:08 compute-0 python3.9[200421]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Jan 05 14:37:08 compute-0 sudo[200419]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:09 compute-0 sudo[200571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-henqrsgodzgximoebbqnaspensweszvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623828.8224924-851-245852223752145/AnsiballZ_container_config_hash.py'
Jan 05 14:37:09 compute-0 sudo[200571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:09 compute-0 python3.9[200573]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 14:37:09 compute-0 sudo[200571]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:10 compute-0 sudo[200723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiykdoznrfpwrsbgteeyoflmbrpndamy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623829.7670045-860-89389629200285/AnsiballZ_podman_container_info.py'
Jan 05 14:37:10 compute-0 sudo[200723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:10 compute-0 python3.9[200725]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 14:37:10 compute-0 sudo[200723]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:11 compute-0 sudo[200901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weoirggposxepgqwteloqcdxrzccfmcq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623831.4199317-873-54425667590233/AnsiballZ_edpm_container_manage.py'
Jan 05 14:37:11 compute-0 sudo[200901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:12 compute-0 python3[200903]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 14:37:14 compute-0 podman[200917]: 2026-01-05 14:37:14.200231446 +0000 UTC m=+1.985947853 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 05 14:37:14 compute-0 podman[201013]: 2026-01-05 14:37:14.416098549 +0000 UTC m=+0.073343933 container create 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=podman_exporter)
Jan 05 14:37:14 compute-0 podman[201013]: 2026-01-05 14:37:14.380929974 +0000 UTC m=+0.038175368 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 05 14:37:14 compute-0 python3[200903]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Jan 05 14:37:14 compute-0 sudo[200901]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:15 compute-0 sudo[201201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyljrmsrepgmotjnyfglbhidwqzdpamc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623834.8900974-881-152434927395571/AnsiballZ_stat.py'
Jan 05 14:37:15 compute-0 sudo[201201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:15 compute-0 python3.9[201203]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:37:15 compute-0 sudo[201201]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:16 compute-0 sudo[201366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyyxbbjjdnksloydnqokvoapvknitzan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623835.8020191-890-193059774091461/AnsiballZ_file.py'
Jan 05 14:37:16 compute-0 sudo[201366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:16 compute-0 podman[201329]: 2026-01-05 14:37:16.308089109 +0000 UTC m=+0.171253873 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 05 14:37:16 compute-0 python3.9[201372]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:16 compute-0 sudo[201366]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:16 compute-0 sudo[201455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svjttisdiypfhqkfecwggsrytdnayxrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623835.8020191-890-193059774091461/AnsiballZ_stat.py'
Jan 05 14:37:16 compute-0 sudo[201455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:17 compute-0 python3.9[201457]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:37:17 compute-0 sudo[201455]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:17 compute-0 sudo[201606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtvrkwccmpdwwvrvitzoxdvqfliojnaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623837.1122332-890-270754984307152/AnsiballZ_copy.py'
Jan 05 14:37:17 compute-0 sudo[201606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:17 compute-0 python3.9[201608]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767623837.1122332-890-270754984307152/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:17 compute-0 sudo[201606]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:18 compute-0 sudo[201682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aljqdsuacishvqyzmjcmusofwxkgkeiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623837.1122332-890-270754984307152/AnsiballZ_systemd.py'
Jan 05 14:37:18 compute-0 sudo[201682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:18 compute-0 python3.9[201684]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:37:18 compute-0 systemd[1]: Reloading.
Jan 05 14:37:18 compute-0 systemd-rc-local-generator[201710]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:37:18 compute-0 systemd-sysv-generator[201715]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:37:18 compute-0 sudo[201682]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:19 compute-0 sudo[201792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qprgdysbpmholmjzfqxikwstlzpnxwhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623837.1122332-890-270754984307152/AnsiballZ_systemd.py'
Jan 05 14:37:19 compute-0 sudo[201792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:19 compute-0 python3.9[201794]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:37:20 compute-0 systemd[1]: Reloading.
Jan 05 14:37:20 compute-0 podman[201797]: 2026-01-05 14:37:20.834662587 +0000 UTC m=+0.119947338 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 14:37:20 compute-0 systemd-rc-local-generator[201841]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:37:20 compute-0 systemd-sysv-generator[201845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:37:21 compute-0 auditd[701]: Audit daemon rotating log files
Jan 05 14:37:21 compute-0 systemd[1]: Starting podman_exporter container...
Jan 05 14:37:21 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebb8b8fd8420b08ad839428b11bc0dd8df9f429a422102f0bc030339f2d3666/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 14:37:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebb8b8fd8420b08ad839428b11bc0dd8df9f429a422102f0bc030339f2d3666/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 14:37:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b.
Jan 05 14:37:21 compute-0 podman[201853]: 2026-01-05 14:37:21.316865155 +0000 UTC m=+0.225228309 container init 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:37:21 compute-0 podman_exporter[201869]: ts=2026-01-05T14:37:21.344Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Jan 05 14:37:21 compute-0 podman_exporter[201869]: ts=2026-01-05T14:37:21.344Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Jan 05 14:37:21 compute-0 podman_exporter[201869]: ts=2026-01-05T14:37:21.344Z caller=handler.go:94 level=info msg="enabled collectors"
Jan 05 14:37:21 compute-0 podman_exporter[201869]: ts=2026-01-05T14:37:21.344Z caller=handler.go:105 level=info collector=container
Jan 05 14:37:21 compute-0 podman[201853]: 2026-01-05 14:37:21.363616514 +0000 UTC m=+0.271979658 container start 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:37:21 compute-0 podman[201853]: podman_exporter
Jan 05 14:37:21 compute-0 systemd[1]: Starting Podman API Service...
Jan 05 14:37:21 compute-0 systemd[1]: Started Podman API Service.
Jan 05 14:37:21 compute-0 systemd[1]: Started podman_exporter container.
Jan 05 14:37:21 compute-0 podman[201880]: time="2026-01-05T14:37:21Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 05 14:37:21 compute-0 podman[201880]: time="2026-01-05T14:37:21Z" level=info msg="Setting parallel job count to 25"
Jan 05 14:37:21 compute-0 podman[201880]: time="2026-01-05T14:37:21Z" level=info msg="Using sqlite as database backend"
Jan 05 14:37:21 compute-0 podman[201880]: time="2026-01-05T14:37:21Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 05 14:37:21 compute-0 sudo[201792]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:21 compute-0 podman[201880]: time="2026-01-05T14:37:21Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 05 14:37:21 compute-0 podman[201880]: time="2026-01-05T14:37:21Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 05 14:37:21 compute-0 podman[201880]: @ - - [05/Jan/2026:14:37:21 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Jan 05 14:37:21 compute-0 podman[201880]: time="2026-01-05T14:37:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:37:21 compute-0 podman[201878]: 2026-01-05 14:37:21.477514488 +0000 UTC m=+0.095398812 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:37:21 compute-0 systemd[1]: 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b-87f33db6a159bc8.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 14:37:21 compute-0 systemd[1]: 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b-87f33db6a159bc8.service: Failed with result 'exit-code'.
Jan 05 14:37:21 compute-0 podman[201880]: @ - - [05/Jan/2026:14:37:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18095 "" "Go-http-client/1.1"
Jan 05 14:37:21 compute-0 podman_exporter[201869]: ts=2026-01-05T14:37:21.488Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Jan 05 14:37:21 compute-0 podman_exporter[201869]: ts=2026-01-05T14:37:21.489Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Jan 05 14:37:21 compute-0 podman_exporter[201869]: ts=2026-01-05T14:37:21.490Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Jan 05 14:37:22 compute-0 python3.9[202066]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 14:37:23 compute-0 sudo[202216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwrmigvktdysyenrafkbtlkulnyftcbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623842.7497165-931-178141611536781/AnsiballZ_stat.py'
Jan 05 14:37:23 compute-0 sudo[202216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:23 compute-0 python3.9[202218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:37:23 compute-0 sudo[202216]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:23 compute-0 sudo[202341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymrynuikbjynaimionbrfgdqymbeqatj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623842.7497165-931-178141611536781/AnsiballZ_copy.py'
Jan 05 14:37:23 compute-0 sudo[202341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:24 compute-0 python3.9[202343]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623842.7497165-931-178141611536781/.source.yaml _original_basename=.kzi1hrfd follow=False checksum=506524f92b03b4c80e635a53433662e29fc62dca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:24 compute-0 sudo[202341]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:24 compute-0 sudo[202493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwxtguadaelqgumlxkduohdrqwngpxcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623844.4136102-946-245784610562044/AnsiballZ_stat.py'
Jan 05 14:37:24 compute-0 sudo[202493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:24 compute-0 python3.9[202495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:37:25 compute-0 sudo[202493]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:25 compute-0 sudo[202616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfkgboitncbuaswicblldxpdhjtcrplo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623844.4136102-946-245784610562044/AnsiballZ_copy.py'
Jan 05 14:37:25 compute-0 sudo[202616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:25 compute-0 python3.9[202618]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623844.4136102-946-245784610562044/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:37:25 compute-0 sudo[202616]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:25 compute-0 nova_compute[185474]: 2026-01-05 14:37:25.825 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:37:25 compute-0 nova_compute[185474]: 2026-01-05 14:37:25.844 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:37:25 compute-0 nova_compute[185474]: 2026-01-05 14:37:25.845 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:37:25 compute-0 nova_compute[185474]: 2026-01-05 14:37:25.845 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:37:25 compute-0 nova_compute[185474]: 2026-01-05 14:37:25.845 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:37:26 compute-0 nova_compute[185474]: 2026-01-05 14:37:26.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:37:26 compute-0 sudo[202768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdapznbejwgqgjatmldljjctijshcynd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623846.3810272-967-187253279550335/AnsiballZ_file.py'
Jan 05 14:37:26 compute-0 sudo[202768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:26 compute-0 python3.9[202770]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:26 compute-0 sudo[202768]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.393 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.421 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.421 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.421 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.422 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.458 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.458 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.459 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.459 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:37:27 compute-0 sudo[202920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojzfhbzbhntrbmhedvvqgrpkdddopxxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623847.1739247-975-53994109799462/AnsiballZ_file.py'
Jan 05 14:37:27 compute-0 sudo[202920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.699 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.700 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5784MB free_disk=72.61791610717773GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.700 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.701 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:37:27 compute-0 podman[202922]: 2026-01-05 14:37:27.706118616 +0000 UTC m=+0.106771961 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.770 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.771 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.790 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.803 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.804 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:37:27 compute-0 nova_compute[185474]: 2026-01-05 14:37:27.805 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:37:27 compute-0 python3.9[202923]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:37:27 compute-0 sudo[202920]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:28 compute-0 sudo[203096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdsvwaehpxcsfritxackqgmmmotoduzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623848.09343-983-194469521662998/AnsiballZ_stat.py'
Jan 05 14:37:28 compute-0 sudo[203096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:28 compute-0 python3.9[203098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:37:28 compute-0 sudo[203096]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:29 compute-0 sudo[203174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oevdxihaqwvluompeycolzewgnxtugzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623848.09343-983-194469521662998/AnsiballZ_file.py'
Jan 05 14:37:29 compute-0 sudo[203174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:29 compute-0 python3.9[203176]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.gedb_2ey recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:29 compute-0 sudo[203174]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:29 compute-0 python3.9[203326]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:32 compute-0 sudo[203747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owueeyoauakwisgexhpevqfhdbxvzhcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623851.8345618-1020-189670314867885/AnsiballZ_container_config_data.py'
Jan 05 14:37:32 compute-0 sudo[203747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:32 compute-0 python3.9[203749]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Jan 05 14:37:32 compute-0 sudo[203747]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:33 compute-0 sudo[203899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwbwbxuxgljofntdhoyfwnaivhpjqjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623852.8823488-1031-45597128652953/AnsiballZ_container_config_hash.py'
Jan 05 14:37:33 compute-0 sudo[203899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:33 compute-0 python3.9[203901]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 14:37:33 compute-0 sudo[203899]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:34 compute-0 sudo[204051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxpzgomrpevkuxzeergygywjbfzsxikk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623853.853733-1040-15781099848786/AnsiballZ_podman_container_info.py'
Jan 05 14:37:34 compute-0 sudo[204051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:34 compute-0 python3.9[204053]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 14:37:34 compute-0 sudo[204051]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:35 compute-0 sudo[204230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epgfthbxjmvfyeeeyixihsyfdpaqacfg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623855.3854272-1053-231920995299124/AnsiballZ_edpm_container_manage.py'
Jan 05 14:37:35 compute-0 sudo[204230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:36 compute-0 python3[204232]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 14:37:38 compute-0 podman[204288]: 2026-01-05 14:37:38.231831112 +0000 UTC m=+0.854520941 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 05 14:37:38 compute-0 systemd[1]: 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1-4c81b73250372c03.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 14:37:38 compute-0 systemd[1]: 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1-4c81b73250372c03.service: Failed with result 'exit-code'.
Jan 05 14:37:38 compute-0 podman[204245]: 2026-01-05 14:37:38.583555795 +0000 UTC m=+2.451121117 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 05 14:37:38 compute-0 podman[204364]: 2026-01-05 14:37:38.782220391 +0000 UTC m=+0.078813431 container create 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public)
Jan 05 14:37:38 compute-0 podman[204364]: 2026-01-05 14:37:38.746640504 +0000 UTC m=+0.043233604 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 05 14:37:38 compute-0 python3[204232]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 05 14:37:38 compute-0 sudo[204230]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:39 compute-0 sudo[204552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgyrbqlqljzzerfpgytofdekzxgsdsxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623859.2454615-1061-228852106768756/AnsiballZ_stat.py'
Jan 05 14:37:39 compute-0 sudo[204552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:39 compute-0 python3.9[204554]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:37:39 compute-0 sudo[204552]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:40 compute-0 sudo[204706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lredmbdvoivoqsushcsteiehbqpdvtlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623860.2364814-1070-93349541924096/AnsiballZ_file.py'
Jan 05 14:37:40 compute-0 sudo[204706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:40 compute-0 python3.9[204708]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:40 compute-0 sudo[204706]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:41 compute-0 sudo[204782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpmqkckmzmfkqquhskwhirtzivcibeml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623860.2364814-1070-93349541924096/AnsiballZ_stat.py'
Jan 05 14:37:41 compute-0 sudo[204782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:41 compute-0 python3.9[204784]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:37:41 compute-0 sudo[204782]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:42 compute-0 sudo[204933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgzcxoaozcvneuvurkmjcnwptrdkwbjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623861.4432483-1070-189411165998800/AnsiballZ_copy.py'
Jan 05 14:37:42 compute-0 sudo[204933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:42 compute-0 python3.9[204935]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767623861.4432483-1070-189411165998800/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:42 compute-0 sudo[204933]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:42 compute-0 sudo[205009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udbmxycksqhcxsvpoysfpjdtwnuwhaih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623861.4432483-1070-189411165998800/AnsiballZ_systemd.py'
Jan 05 14:37:42 compute-0 sudo[205009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:42 compute-0 python3.9[205011]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:37:42 compute-0 systemd[1]: Reloading.
Jan 05 14:37:43 compute-0 systemd-sysv-generator[205043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:37:43 compute-0 systemd-rc-local-generator[205037]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:37:43 compute-0 sudo[205009]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:43 compute-0 sudo[205121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpwrxbjwblzrtxlqhkmumdhbwrgefbfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623861.4432483-1070-189411165998800/AnsiballZ_systemd.py'
Jan 05 14:37:43 compute-0 sudo[205121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:43 compute-0 python3.9[205123]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:37:43 compute-0 systemd[1]: Reloading.
Jan 05 14:37:44 compute-0 systemd-rc-local-generator[205153]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:37:44 compute-0 systemd-sysv-generator[205156]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:37:44 compute-0 systemd[1]: Starting openstack_network_exporter container...
Jan 05 14:37:44 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af89b23bc276f8992b42c37c689abe8c40e524b680e2afbf78ae8beacf70e66/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 05 14:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af89b23bc276f8992b42c37c689abe8c40e524b680e2afbf78ae8beacf70e66/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 14:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2af89b23bc276f8992b42c37c689abe8c40e524b680e2afbf78ae8beacf70e66/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 14:37:44 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f.
Jan 05 14:37:44 compute-0 podman[205163]: 2026-01-05 14:37:44.417762042 +0000 UTC m=+0.173510627 container init 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:48: registering *bridge.Collector
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:48: registering *coverage.Collector
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:48: registering *datapath.Collector
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:48: registering *iface.Collector
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:48: registering *memory.Collector
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:48: registering *ovn.Collector
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:48: registering *pmd_perf.Collector
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:48: registering *pmd_rxq.Collector
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: INFO    14:37:44 main.go:48: registering *vswitch.Collector
Jan 05 14:37:44 compute-0 openstack_network_exporter[205179]: NOTICE  14:37:44 main.go:76: listening on https://:9105/metrics
Jan 05 14:37:44 compute-0 podman[205163]: 2026-01-05 14:37:44.453850655 +0000 UTC m=+0.209599240 container start 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, managed_by=edpm_ansible)
Jan 05 14:37:44 compute-0 podman[205163]: openstack_network_exporter
Jan 05 14:37:44 compute-0 systemd[1]: Started openstack_network_exporter container.
Jan 05 14:37:44 compute-0 sudo[205121]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:44 compute-0 podman[205189]: 2026-01-05 14:37:44.582973321 +0000 UTC m=+0.109729449 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6)
Jan 05 14:37:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:37:44.788 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:37:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:37:44.788 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:37:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:37:44.788 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:37:45 compute-0 python3.9[205361]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 14:37:46 compute-0 sudo[205511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxizfbyacgqrdcigbevxvksfeqwzipdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623865.8159833-1111-64545265874819/AnsiballZ_stat.py'
Jan 05 14:37:46 compute-0 sudo[205511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:46 compute-0 python3.9[205513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:37:46 compute-0 sudo[205511]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:46 compute-0 podman[205514]: 2026-01-05 14:37:46.607681239 +0000 UTC m=+0.141183766 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 14:37:47 compute-0 sudo[205662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzztkxekxomdeoabjnhyhnmehsuskajp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623865.8159833-1111-64545265874819/AnsiballZ_copy.py'
Jan 05 14:37:47 compute-0 sudo[205662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:47 compute-0 python3.9[205664]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623865.8159833-1111-64545265874819/.source.yaml _original_basename=.pn2cew9t follow=False checksum=17e628ccf778670c710ac3d65dd5e344c742040c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:47 compute-0 sudo[205662]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:47 compute-0 sudo[205814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqcrqqshqgtsvttzwgyqdkuqvifgeszx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623867.4694157-1126-89711653842138/AnsiballZ_find.py'
Jan 05 14:37:47 compute-0 sudo[205814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:48 compute-0 python3.9[205816]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 14:37:48 compute-0 sudo[205814]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:48 compute-0 sudo[205966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsrsgnwhzvophpxifttmhcofmtdtganv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623868.4856176-1136-190738353311417/AnsiballZ_podman_container_info.py'
Jan 05 14:37:48 compute-0 sudo[205966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:49 compute-0 python3.9[205968]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 05 14:37:49 compute-0 sudo[205966]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:49 compute-0 sudo[206130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkhqicbpedylyhmkruixmpdpsejjidot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623869.4422562-1144-188124148891120/AnsiballZ_podman_container_exec.py'
Jan 05 14:37:49 compute-0 sudo[206130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:50 compute-0 python3.9[206132]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:37:50 compute-0 systemd[1]: Started libpod-conmon-eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c.scope.
Jan 05 14:37:50 compute-0 podman[206133]: 2026-01-05 14:37:50.340882165 +0000 UTC m=+0.109504533 container exec eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 05 14:37:50 compute-0 podman[206133]: 2026-01-05 14:37:50.375990152 +0000 UTC m=+0.144612540 container exec_died eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:37:50 compute-0 systemd[1]: libpod-conmon-eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c.scope: Deactivated successfully.
Jan 05 14:37:50 compute-0 sudo[206130]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:51 compute-0 sudo[206314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgqlhuodjmtaczqdvnczwudderavovtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623870.7082884-1152-38343022009947/AnsiballZ_podman_container_exec.py'
Jan 05 14:37:51 compute-0 sudo[206314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:51 compute-0 podman[206316]: 2026-01-05 14:37:51.180451949 +0000 UTC m=+0.087482773 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true)
Jan 05 14:37:51 compute-0 python3.9[206317]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:37:51 compute-0 systemd[1]: Started libpod-conmon-eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c.scope.
Jan 05 14:37:51 compute-0 podman[206338]: 2026-01-05 14:37:51.446971537 +0000 UTC m=+0.126277439 container exec eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 05 14:37:51 compute-0 podman[206338]: 2026-01-05 14:37:51.458681746 +0000 UTC m=+0.137987678 container exec_died eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:37:51 compute-0 sudo[206314]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:51 compute-0 systemd[1]: libpod-conmon-eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c.scope: Deactivated successfully.
Jan 05 14:37:51 compute-0 podman[206371]: 2026-01-05 14:37:51.663933235 +0000 UTC m=+0.092300504 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:37:52 compute-0 sudo[206546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncmnfjwbcczxyzmomzsaavtnvhsvniir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623871.7840986-1160-155269859084942/AnsiballZ_file.py'
Jan 05 14:37:52 compute-0 sudo[206546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:52 compute-0 python3.9[206548]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:52 compute-0 sudo[206546]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:53 compute-0 sudo[206698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxqcecushwdkrismyjyyoqxqabcoxudt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623872.6549034-1169-221192268240984/AnsiballZ_podman_container_info.py'
Jan 05 14:37:53 compute-0 sudo[206698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:53 compute-0 python3.9[206700]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 05 14:37:53 compute-0 sudo[206698]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:54 compute-0 sudo[206863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzgxsvhuzodzxtmprpvovzuagonknwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623873.6502976-1177-208570823668838/AnsiballZ_podman_container_exec.py'
Jan 05 14:37:54 compute-0 sudo[206863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:54 compute-0 python3.9[206865]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:37:54 compute-0 systemd[1]: Started libpod-conmon-c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827.scope.
Jan 05 14:37:54 compute-0 podman[206866]: 2026-01-05 14:37:54.403605935 +0000 UTC m=+0.112877285 container exec c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 14:37:54 compute-0 podman[206866]: 2026-01-05 14:37:54.437729935 +0000 UTC m=+0.147001295 container exec_died c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:37:54 compute-0 systemd[1]: libpod-conmon-c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827.scope: Deactivated successfully.
Jan 05 14:37:54 compute-0 sudo[206863]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:55 compute-0 sudo[207048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfbmbriploxpfpqrqgnjtdjfwzrkdwjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623874.7717357-1185-79372998001961/AnsiballZ_podman_container_exec.py'
Jan 05 14:37:55 compute-0 sudo[207048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:55 compute-0 python3.9[207050]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:37:55 compute-0 systemd[1]: Started libpod-conmon-c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827.scope.
Jan 05 14:37:55 compute-0 podman[207051]: 2026-01-05 14:37:55.489518238 +0000 UTC m=+0.104167688 container exec c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:37:55 compute-0 podman[207051]: 2026-01-05 14:37:55.501573406 +0000 UTC m=+0.116222806 container exec_died c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 14:37:55 compute-0 sudo[207048]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:55 compute-0 systemd[1]: libpod-conmon-c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827.scope: Deactivated successfully.
Jan 05 14:37:56 compute-0 sudo[207232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuvsbuszyiyklynkzdklexhqnhgjlmsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623875.8213224-1193-196194410294974/AnsiballZ_file.py'
Jan 05 14:37:56 compute-0 sudo[207232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:56 compute-0 python3.9[207234]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:37:56 compute-0 sudo[207232]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:57 compute-0 sudo[207384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aichpsgvmlsofjcjadktmkkcvcvlggdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623876.7394688-1202-252725387357627/AnsiballZ_podman_container_info.py'
Jan 05 14:37:57 compute-0 sudo[207384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:57 compute-0 python3.9[207386]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 05 14:37:57 compute-0 sudo[207384]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:58 compute-0 sudo[207560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtiqmdkiovescehoqnctxvmunpsfzicw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623877.6856048-1210-137887041349225/AnsiballZ_podman_container_exec.py'
Jan 05 14:37:58 compute-0 sudo[207560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:58 compute-0 podman[207523]: 2026-01-05 14:37:58.103040121 +0000 UTC m=+0.097917008 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 14:37:58 compute-0 python3.9[207566]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:37:58 compute-0 systemd[1]: Started libpod-conmon-7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1.scope.
Jan 05 14:37:58 compute-0 podman[207576]: 2026-01-05 14:37:58.399348101 +0000 UTC m=+0.108635960 container exec 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251224, tcib_managed=true)
Jan 05 14:37:58 compute-0 podman[207576]: 2026-01-05 14:37:58.436535563 +0000 UTC m=+0.145823422 container exec_died 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute)
Jan 05 14:37:58 compute-0 systemd[1]: libpod-conmon-7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1.scope: Deactivated successfully.
Jan 05 14:37:58 compute-0 sudo[207560]: pam_unix(sudo:session): session closed for user root
Jan 05 14:37:59 compute-0 sudo[207758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdwsakbsxyrdrmxhxlomlekxbkhgaora ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623878.840322-1218-52677892230374/AnsiballZ_podman_container_exec.py'
Jan 05 14:37:59 compute-0 sudo[207758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:37:59 compute-0 python3.9[207760]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:37:59 compute-0 systemd[1]: Started libpod-conmon-7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1.scope.
Jan 05 14:37:59 compute-0 podman[207761]: 2026-01-05 14:37:59.596311547 +0000 UTC m=+0.098424151 container exec 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Jan 05 14:37:59 compute-0 podman[207761]: 2026-01-05 14:37:59.632603305 +0000 UTC m=+0.134715829 container exec_died 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 14:37:59 compute-0 systemd[1]: libpod-conmon-7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1.scope: Deactivated successfully.
Jan 05 14:37:59 compute-0 sudo[207758]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:00 compute-0 sudo[207940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltjoesgcctpjckbzxaffvrehdgtiaixn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623879.9581115-1226-145480038142917/AnsiballZ_file.py'
Jan 05 14:38:00 compute-0 sudo[207940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:00 compute-0 python3.9[207942]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:00 compute-0 sudo[207940]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:01 compute-0 sudo[208092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqjxvnnsivdplqihhixbmsmkdlotvxqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623880.867017-1235-5988934018810/AnsiballZ_podman_container_info.py'
Jan 05 14:38:01 compute-0 sudo[208092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:01 compute-0 python3.9[208094]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 05 14:38:01 compute-0 sudo[208092]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:02 compute-0 sudo[208258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-attqysujsoyzqvdncuziyephgpanblal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623881.8589747-1243-44054935879968/AnsiballZ_podman_container_exec.py'
Jan 05 14:38:02 compute-0 sudo[208258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:02 compute-0 python3.9[208260]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:38:02 compute-0 systemd[1]: Started libpod-conmon-fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5.scope.
Jan 05 14:38:02 compute-0 podman[208261]: 2026-01-05 14:38:02.534453511 +0000 UTC m=+0.079819555 container exec fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:38:02 compute-0 podman[208261]: 2026-01-05 14:38:02.567860101 +0000 UTC m=+0.113226165 container exec_died fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:38:02 compute-0 systemd[1]: libpod-conmon-fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5.scope: Deactivated successfully.
Jan 05 14:38:02 compute-0 sudo[208258]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:03 compute-0 sudo[208444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okpfszpbqioyavpznpqbrpmnmofrurqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623882.8357072-1251-264457813106988/AnsiballZ_podman_container_exec.py'
Jan 05 14:38:03 compute-0 sudo[208444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:03 compute-0 python3.9[208446]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:38:03 compute-0 systemd[1]: Started libpod-conmon-fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5.scope.
Jan 05 14:38:03 compute-0 podman[208447]: 2026-01-05 14:38:03.551801496 +0000 UTC m=+0.102397059 container exec fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 14:38:03 compute-0 podman[208447]: 2026-01-05 14:38:03.587780386 +0000 UTC m=+0.138375949 container exec_died fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:38:03 compute-0 systemd[1]: libpod-conmon-fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5.scope: Deactivated successfully.
Jan 05 14:38:03 compute-0 sudo[208444]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:04 compute-0 sudo[208628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzlbfpvnrcvgurmenxxvwmskeexfviqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623883.9172316-1259-240719243673116/AnsiballZ_file.py'
Jan 05 14:38:04 compute-0 sudo[208628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:04 compute-0 python3.9[208630]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:04 compute-0 sudo[208628]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:05 compute-0 sudo[208780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opszfdheswzlkurbgfuxkcmrgrjgfcem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623884.782827-1268-210648944290131/AnsiballZ_podman_container_info.py'
Jan 05 14:38:05 compute-0 sudo[208780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:05 compute-0 python3.9[208782]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 05 14:38:05 compute-0 sudo[208780]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:06 compute-0 sudo[208945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwhjseohbiifzeyejvjtbgzgbxejvjge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623885.7117975-1276-111395230828144/AnsiballZ_podman_container_exec.py'
Jan 05 14:38:06 compute-0 sudo[208945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:06 compute-0 python3.9[208947]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:38:06 compute-0 systemd[1]: Started libpod-conmon-07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b.scope.
Jan 05 14:38:06 compute-0 podman[208948]: 2026-01-05 14:38:06.578478661 +0000 UTC m=+0.144357051 container exec 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:38:06 compute-0 podman[208948]: 2026-01-05 14:38:06.608837218 +0000 UTC m=+0.174715608 container exec_died 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 14:38:06 compute-0 sudo[208945]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:06 compute-0 systemd[1]: libpod-conmon-07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b.scope: Deactivated successfully.
Jan 05 14:38:07 compute-0 sudo[209130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljrxmtmqnxevecncobhtzoxcrkxybqoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623886.8802035-1284-190384070809090/AnsiballZ_podman_container_exec.py'
Jan 05 14:38:07 compute-0 sudo[209130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:07 compute-0 python3.9[209132]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:38:07 compute-0 systemd[1]: Started libpod-conmon-07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b.scope.
Jan 05 14:38:07 compute-0 podman[209133]: 2026-01-05 14:38:07.592548387 +0000 UTC m=+0.100930839 container exec 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:38:07 compute-0 podman[209133]: 2026-01-05 14:38:07.628718662 +0000 UTC m=+0.137101074 container exec_died 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:38:07 compute-0 systemd[1]: libpod-conmon-07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b.scope: Deactivated successfully.
Jan 05 14:38:07 compute-0 sudo[209130]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:08 compute-0 sudo[209314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipebefbcdanbyipllnjkfmigczsyncow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623887.902171-1292-94877779682817/AnsiballZ_file.py'
Jan 05 14:38:08 compute-0 sudo[209314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:08 compute-0 python3.9[209316]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:08 compute-0 sudo[209314]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:08 compute-0 podman[209317]: 2026-01-05 14:38:08.627744419 +0000 UTC m=+0.108917737 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 14:38:09 compute-0 sudo[209486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnngrofeyambjzosanbxjbthqkbdbaif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623888.873086-1301-160607886883520/AnsiballZ_podman_container_info.py'
Jan 05 14:38:09 compute-0 sudo[209486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:09 compute-0 python3.9[209488]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 05 14:38:09 compute-0 sudo[209486]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:10 compute-0 sudo[209651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxhdpezlmapwoyjeociwjgpneownnqdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623889.961875-1309-276721050807357/AnsiballZ_podman_container_exec.py'
Jan 05 14:38:10 compute-0 sudo[209651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:11 compute-0 python3.9[209653]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:38:11 compute-0 systemd[1]: Started libpod-conmon-41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f.scope.
Jan 05 14:38:11 compute-0 podman[209654]: 2026-01-05 14:38:11.176305994 +0000 UTC m=+0.104583050 container exec 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container)
Jan 05 14:38:11 compute-0 podman[209654]: 2026-01-05 14:38:11.212807048 +0000 UTC m=+0.141084134 container exec_died 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Jan 05 14:38:11 compute-0 systemd[1]: libpod-conmon-41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f.scope: Deactivated successfully.
Jan 05 14:38:11 compute-0 sudo[209651]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:11 compute-0 sudo[209833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaxchgtvljarjgyeywfpxktmenxycvbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623891.538566-1317-133514672523159/AnsiballZ_podman_container_exec.py'
Jan 05 14:38:11 compute-0 sudo[209833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:12 compute-0 python3.9[209835]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:38:12 compute-0 systemd[1]: Started libpod-conmon-41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f.scope.
Jan 05 14:38:12 compute-0 podman[209836]: 2026-01-05 14:38:12.27376331 +0000 UTC m=+0.109020590 container exec 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Jan 05 14:38:12 compute-0 podman[209836]: 2026-01-05 14:38:12.30899277 +0000 UTC m=+0.144250010 container exec_died 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Jan 05 14:38:12 compute-0 systemd[1]: libpod-conmon-41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f.scope: Deactivated successfully.
Jan 05 14:38:12 compute-0 sudo[209833]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:12 compute-0 sudo[210016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejarqikfweudzplzcitqmqkwitlubsyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623892.5841515-1325-190346302751509/AnsiballZ_file.py'
Jan 05 14:38:12 compute-0 sudo[210016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:13 compute-0 python3.9[210018]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:13 compute-0 sudo[210016]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:13 compute-0 sudo[210168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmscyiyglzhvxasrmcddjzurifawvhdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623893.4904425-1334-54042435350750/AnsiballZ_file.py'
Jan 05 14:38:13 compute-0 sudo[210168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:14 compute-0 python3.9[210170]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:14 compute-0 sudo[210168]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:14 compute-0 sudo[210320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahibxdohuirdvicqlzhiejbpolvdloql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623894.285944-1342-106930503483931/AnsiballZ_stat.py'
Jan 05 14:38:14 compute-0 sudo[210320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:14 compute-0 python3.9[210322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:38:14 compute-0 sudo[210320]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:15 compute-0 podman[210417]: 2026-01-05 14:38:15.401911159 +0000 UTC m=+0.115786224 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, config_id=openstack_network_exporter, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 05 14:38:15 compute-0 sudo[210462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgzznwvmlxxjdjovkocokyynwkucaose ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623894.285944-1342-106930503483931/AnsiballZ_copy.py'
Jan 05 14:38:15 compute-0 sudo[210462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:15 compute-0 python3.9[210466]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623894.285944-1342-106930503483931/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:15 compute-0 sudo[210462]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:16 compute-0 sudo[210617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypodlnocdaplubemivlkqdryokxrofzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623895.887039-1358-137229298493798/AnsiballZ_file.py'
Jan 05 14:38:16 compute-0 sudo[210617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:16 compute-0 python3.9[210619]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:16 compute-0 sudo[210617]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:17 compute-0 sudo[210782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akmixmsgkpifqcniyotuzzdrtmbdkdpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623896.6774724-1366-259768939596802/AnsiballZ_stat.py'
Jan 05 14:38:17 compute-0 sudo[210782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:17 compute-0 podman[210743]: 2026-01-05 14:38:17.130733659 +0000 UTC m=+0.119561467 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 14:38:17 compute-0 python3.9[210791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:38:17 compute-0 sudo[210782]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:17 compute-0 sudo[210873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdfwgnytnvrrbmkttmpfxhtgdskklylf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623896.6774724-1366-259768939596802/AnsiballZ_file.py'
Jan 05 14:38:17 compute-0 sudo[210873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:17 compute-0 python3.9[210875]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:17 compute-0 sudo[210873]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:18 compute-0 sudo[211025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbadojwxtnnnibgyhjpjndahcavppbnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623898.1334877-1378-210182753456407/AnsiballZ_stat.py'
Jan 05 14:38:18 compute-0 sudo[211025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:18 compute-0 python3.9[211027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:38:18 compute-0 sudo[211025]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:19 compute-0 sudo[211103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkeosedruzmpeviyhmzgcevjbpbvdhyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623898.1334877-1378-210182753456407/AnsiballZ_file.py'
Jan 05 14:38:19 compute-0 sudo[211103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:19 compute-0 python3.9[211105]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.t3f83jph recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:19 compute-0 sudo[211103]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:19 compute-0 sudo[211255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihqyetemufnrxvgmaulkltnqqmqzyzzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623899.442111-1390-247468446456034/AnsiballZ_stat.py'
Jan 05 14:38:19 compute-0 sudo[211255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:20 compute-0 python3.9[211257]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:38:20 compute-0 sudo[211255]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:20 compute-0 sudo[211333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyitsrjtktgxymctelllvhxijsmusxhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623899.442111-1390-247468446456034/AnsiballZ_file.py'
Jan 05 14:38:20 compute-0 sudo[211333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:20 compute-0 python3.9[211335]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:20 compute-0 sudo[211333]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:21 compute-0 sudo[211498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhoownpymtwymftbmiczueyeeuqoaqms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623900.9723113-1403-225338703807089/AnsiballZ_command.py'
Jan 05 14:38:21 compute-0 sudo[211498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:21 compute-0 podman[211459]: 2026-01-05 14:38:21.441002791 +0000 UTC m=+0.074432719 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 05 14:38:21 compute-0 python3.9[211506]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:38:21 compute-0 sudo[211498]: pam_unix(sudo:session): session closed for user root
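The "nft -j list ruleset" call above dumps the live ruleset as JSON for the playbook to inspect before the EDPM fragments are regenerated. For reference, a minimal Python sketch of consuming that output (the top-level "nftables" key is standard nft JSON output; the chain filter is only an example, and the command needs root, which is why it runs under sudo here):

    import json
    import subprocess

    # Same command and flags as the log line above.
    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout
    objects = json.loads(out)["nftables"]
    chains = [o["chain"]["name"] for o in objects if "chain" in o]
    print(f"{len(chains)} chains currently loaded")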
Jan 05 14:38:22 compute-0 podman[211631]: 2026-01-05 14:38:22.434951359 +0000 UTC m=+0.068799115 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 14:38:22 compute-0 sudo[211672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmpjaswrlxcyksrgffvyocjcsgezntfo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623901.887434-1411-159141513006880/AnsiballZ_edpm_nftables_from_files.py'
Jan 05 14:38:22 compute-0 sudo[211672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:22 compute-0 python3[211684]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 05 14:38:22 compute-0 sudo[211672]: pam_unix(sudo:session): session closed for user root
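The ansible-edpm_nftables_from_files module invoked above gathers the per-service YAML snippets that earlier tasks wrote under /var/lib/edpm-config/firewall (telemetry.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml, ...). Its real implementation is not shown in this log; the following is only a rough Python sketch of that aggregation step, assuming each file holds a mapping of rule names to rule definitions:

    import glob
    import yaml  # PyYAML, assumed present on the host

    def gather_rules(src="/var/lib/edpm-config/firewall"):
        # Merge every YAML snippet in src into one rule dictionary.
        rules = {}
        for path in sorted(glob.glob(f"{src}/*.yaml")):
            with open(path) as fh:
                data = yaml.safe_load(fh) or {}
            rules.update(data)  # later files may override earlier names
        return rules

The merged result is what the subsequent tasks render into the edpm-*.nft fragments seen below.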
Jan 05 14:38:23 compute-0 sudo[211834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-posotholpxsifuodrborxmgfblivggbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623902.8628855-1419-106017366998552/AnsiballZ_stat.py'
Jan 05 14:38:23 compute-0 sudo[211834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:23 compute-0 python3.9[211836]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:38:23 compute-0 sudo[211834]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:23 compute-0 sudo[211912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpyxnvcfbyomcltrknmktengwbauvktt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623902.8628855-1419-106017366998552/AnsiballZ_file.py'
Jan 05 14:38:23 compute-0 sudo[211912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:24 compute-0 python3.9[211914]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:24 compute-0 sudo[211912]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:25 compute-0 sudo[212064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cshacumonvbmjhnvzqpsbmqadzoouoxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623904.5164266-1431-2201017719264/AnsiballZ_stat.py'
Jan 05 14:38:25 compute-0 sudo[212064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:25 compute-0 python3.9[212066]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:38:25 compute-0 sudo[212064]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:25 compute-0 sudo[212142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnvswpxtpdqgisefuudzhwjdmrjzlchd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623904.5164266-1431-2201017719264/AnsiballZ_file.py'
Jan 05 14:38:25 compute-0 sudo[212142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:25 compute-0 nova_compute[185474]: 2026-01-05 14:38:25.782 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:38:25 compute-0 python3.9[212144]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:25 compute-0 sudo[212142]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:26 compute-0 nova_compute[185474]: 2026-01-05 14:38:26.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:38:26 compute-0 nova_compute[185474]: 2026-01-05 14:38:26.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:38:26 compute-0 nova_compute[185474]: 2026-01-05 14:38:26.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:38:26 compute-0 nova_compute[185474]: 2026-01-05 14:38:26.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:38:26 compute-0 sudo[212294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gantqdnaqgjlfmndktunrlhxaevyiqgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623906.1346278-1443-24498043915803/AnsiballZ_stat.py'
Jan 05 14:38:26 compute-0 sudo[212294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:26 compute-0 python3.9[212296]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:38:26 compute-0 sudo[212294]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:27 compute-0 sudo[212372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymtymjfbmzshdkrdjgsvhsmnvadyfuem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623906.1346278-1443-24498043915803/AnsiballZ_file.py'
Jan 05 14:38:27 compute-0 sudo[212372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:27 compute-0 python3.9[212374]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:27 compute-0 sudo[212372]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:27 compute-0 sudo[212524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqirpnshkdmmisrhyluvjtqxwjmbimov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623907.550788-1455-43924721979168/AnsiballZ_stat.py'
Jan 05 14:38:28 compute-0 sudo[212524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:28 compute-0 python3.9[212526]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:38:28 compute-0 sudo[212524]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.437 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.438 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.438 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.438 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:38:28 compute-0 sudo[212615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwsfrverbvhennzwbevmacotcmjfddjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623907.550788-1455-43924721979168/AnsiballZ_file.py'
Jan 05 14:38:28 compute-0 sudo[212615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:28 compute-0 podman[212576]: 2026-01-05 14:38:28.578436723 +0000 UTC m=+0.088489151 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.657 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.658 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5844MB free_disk=72.48148727416992GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.658 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.658 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.758 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.759 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:38:28 compute-0 python3.9[212628]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:28 compute-0 sudo[212615]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.795 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.820 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.822 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:38:28 compute-0 nova_compute[185474]: 2026-01-05 14:38:28.823 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
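The inventory reported to placement a few lines above pairs each resource class with a total, a reserved amount and an allocation ratio; the capacity the scheduler can hand out works out to roughly (total - reserved) * allocation_ratio. A quick check with the values from this log line (the formula is the usual placement convention, not something stated in the log itself):

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 71.1

i.e. the 8 physical vCPUs are overcommitted 4x while disk is slightly undercommitted.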
Jan 05 14:38:29 compute-0 sudo[212778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctwdbefppjxusykmojjcxwfwmhdqfbyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623909.0029624-1467-274729492355971/AnsiballZ_stat.py'
Jan 05 14:38:29 compute-0 sudo[212778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:29 compute-0 python3.9[212780]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:38:29 compute-0 sudo[212778]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:29 compute-0 nova_compute[185474]: 2026-01-05 14:38:29.819 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:38:29 compute-0 nova_compute[185474]: 2026-01-05 14:38:29.820 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:38:29 compute-0 nova_compute[185474]: 2026-01-05 14:38:29.820 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:38:29 compute-0 nova_compute[185474]: 2026-01-05 14:38:29.820 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:38:29 compute-0 nova_compute[185474]: 2026-01-05 14:38:29.838 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 14:38:30 compute-0 sudo[212903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etfbsuqffwiqwyypnwgxzpoezoxbkthk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623909.0029624-1467-274729492355971/AnsiballZ_copy.py'
Jan 05 14:38:30 compute-0 sudo[212903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:30 compute-0 python3.9[212905]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767623909.0029624-1467-274729492355971/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:30 compute-0 sudo[212903]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:31 compute-0 sudo[213055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvaliyzpmwipcxtfgeiukifmbbqgjfdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623910.6991394-1482-109079262787391/AnsiballZ_file.py'
Jan 05 14:38:31 compute-0 sudo[213055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:31 compute-0 python3.9[213057]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:31 compute-0 sudo[213055]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:31 compute-0 sudo[213207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytwzeeyhmbvvpqihozmdtrxznxhucdqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623911.5752864-1490-175285226340404/AnsiballZ_command.py'
Jan 05 14:38:31 compute-0 sudo[213207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:32 compute-0 python3.9[213209]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:38:32 compute-0 sudo[213207]: pam_unix(sudo:session): session closed for user root
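The command above concatenates the generated fragments in dependency order (chains, flushes, rules, update-jumps, jumps) and feeds them to nft with -c, so the combined ruleset is only parsed and checked, not loaded. A minimal Python equivalent of that dry run, using the same files and flags as the log line (only the helper name is mine):

    import subprocess

    NFT_FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def check_ruleset(files=NFT_FILES):
        # -c makes nft validate the input without committing anything.
        payload = b"".join(open(f, "rb").read() for f in files)
        return subprocess.run(["nft", "-c", "-f", "-"], input=payload).returncode == 0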
Jan 05 14:38:32 compute-0 sudo[213362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxrpwytvajpxjvkjxhjsrrpgyvmirbqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623912.4767857-1498-150779581241088/AnsiballZ_blockinfile.py'
Jan 05 14:38:32 compute-0 sudo[213362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:33 compute-0 python3.9[213364]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:33 compute-0 sudo[213362]: pam_unix(sudo:session): session closed for user root
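The blockinfile task that just finished keeps the four include lines inside an ANSIBLE MANAGED BLOCK in /etc/sysconfig/nftables.conf and validates the candidate file with "nft -c -f %s" before putting it in place (both the marker template and the validate command are visible in the invocation above). A rough Python sketch of the resulting block and that validation step, assuming the markers are simply appended when the file does not already contain them:

    import subprocess
    import tempfile

    INCLUDES = [
        'include "/etc/nftables/iptables.nft"',
        'include "/etc/nftables/edpm-chains.nft"',
        'include "/etc/nftables/edpm-rules.nft"',
        'include "/etc/nftables/edpm-jumps.nft"',
    ]
    block = "\n".join(["# BEGIN ANSIBLE MANAGED BLOCK",
                       *INCLUDES,
                       "# END ANSIBLE MANAGED BLOCK"])

    # Check a candidate copy the same way blockinfile does before it replaces
    # /etc/sysconfig/nftables.conf.
    with open("/etc/sysconfig/nftables.conf") as fh:
        candidate = fh.read().rstrip("\n") + "\n" + block + "\n"
    with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as tmp:
        tmp.write(candidate)
    ok = subprocess.run(["nft", "-c", "-f", tmp.name]).returncode == 0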
Jan 05 14:38:33 compute-0 sudo[213514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbmfjexwlphojrpjkxfzjqsbdwuzdrqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623913.5445988-1507-87094598547040/AnsiballZ_command.py'
Jan 05 14:38:33 compute-0 sudo[213514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:34 compute-0 python3.9[213516]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:38:34 compute-0 sudo[213514]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:34 compute-0 sudo[213667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkzlkrohrtasbajdnbxfrzalmnqcligs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623914.4063032-1515-230975925132691/AnsiballZ_stat.py'
Jan 05 14:38:34 compute-0 sudo[213667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:34 compute-0 python3.9[213669]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:38:34 compute-0 sudo[213667]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:35 compute-0 sudo[213821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcktbanobwzlxexozemsssbcjprxouqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623915.1494117-1523-223546424349666/AnsiballZ_command.py'
Jan 05 14:38:35 compute-0 sudo[213821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:35 compute-0 python3.9[213823]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:38:35 compute-0 sudo[213821]: pam_unix(sudo:session): session closed for user root
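Together with the earlier "nft -f /etc/nftables/edpm-chains.nft" run and the removal of edpm-rules.nft.changed just below, the live apply is a two-step sequence: the chains are created first, then the flush, rules and update-jumps fragments are piped into a single nft read so the old rules are flushed and the new ones loaded in one pass. A compact Python rendering of that ordering (the helper is mine; the files and flags are the ones in the log):

    import subprocess

    def apply_edpm_ruleset():
        # Step 1: ensure the EDPM chains exist before any rule references them.
        subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)
        # Step 2: flush and repopulate those chains in one nft invocation.
        payload = b"".join(
            open(f, "rb").read()
            for f in ("/etc/nftables/edpm-flushes.nft",
                      "/etc/nftables/edpm-rules.nft",
                      "/etc/nftables/edpm-update-jumps.nft")
        )
        subprocess.run(["nft", "-f", "-"], input=payload, check=True)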
Jan 05 14:38:36 compute-0 sudo[213976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kywnbovsecxptutuwrwudobfurlslduo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623915.996642-1531-265345184386857/AnsiballZ_file.py'
Jan 05 14:38:36 compute-0 sudo[213976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:36 compute-0 python3.9[213978]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:36 compute-0 sudo[213976]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:36 compute-0 podman[201880]: time="2026-01-05T14:38:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:38:36 compute-0 podman[201880]: @ - - [05/Jan/2026:14:38:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21257 "" "Go-http-client/1.1"
Jan 05 14:38:36 compute-0 podman[201880]: @ - - [05/Jan/2026:14:38:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2996 "" "Go-http-client/1.1"
Jan 05 14:38:37 compute-0 sshd-session[185781]: Connection closed by 192.168.122.30 port 53954
Jan 05 14:38:37 compute-0 sshd-session[185778]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:38:37 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 05 14:38:37 compute-0 systemd[1]: session-26.scope: Consumed 2min 18.094s CPU time.
Jan 05 14:38:37 compute-0 systemd-logind[795]: Session 26 logged out. Waiting for processes to exit.
Jan 05 14:38:37 compute-0 systemd-logind[795]: Removed session 26.
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.746 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.747 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.747 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.748 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.748 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.748 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.750 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.751 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.752 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.752 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.752 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.753 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.753 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.754 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:38:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:38:38 compute-0 openstack_network_exporter[205179]: ERROR   14:38:38 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:38:38 compute-0 openstack_network_exporter[205179]: ERROR   14:38:38 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:38:39 compute-0 podman[214011]: 2026-01-05 14:38:39.09065152 +0000 UTC m=+0.076747990 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true)
Jan 05 14:38:42 compute-0 sshd-session[214032]: Accepted publickey for zuul from 192.168.122.30 port 46298 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:38:42 compute-0 systemd-logind[795]: New session 27 of user zuul.
Jan 05 14:38:42 compute-0 systemd[1]: Started Session 27 of User zuul.
Jan 05 14:38:42 compute-0 sshd-session[214032]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:38:43 compute-0 sudo[214185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfgswpnaqclmrukrdaxccocrajoyntim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623922.8456173-24-20603086011579/AnsiballZ_systemd_service.py'
Jan 05 14:38:43 compute-0 sudo[214185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:43 compute-0 sshd-session[214186]: Invalid user solv from 165.22.168.95 port 53076
Jan 05 14:38:43 compute-0 python3.9[214189]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:38:43 compute-0 systemd[1]: Reloading.
Jan 05 14:38:43 compute-0 sshd-session[214186]: Connection closed by invalid user solv 165.22.168.95 port 53076 [preauth]
Jan 05 14:38:43 compute-0 systemd-sysv-generator[214220]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:38:43 compute-0 systemd-rc-local-generator[214217]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:38:44 compute-0 sudo[214185]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:38:44.789 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:38:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:38:44.790 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:38:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:38:44.790 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:38:45 compute-0 python3.9[214374]: ansible-ansible.builtin.service_facts Invoked
Jan 05 14:38:45 compute-0 network[214391]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 14:38:45 compute-0 network[214392]: 'network-scripts' will be removed from distribution in near future.
Jan 05 14:38:45 compute-0 network[214393]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 14:38:46 compute-0 podman[214399]: 2026-01-05 14:38:46.243145093 +0000 UTC m=+0.109817022 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter)
Jan 05 14:38:47 compute-0 podman[214457]: 2026-01-05 14:38:47.364311012 +0000 UTC m=+0.170176496 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 05 14:38:51 compute-0 sudo[214708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lltpiocjasjimzpkfdqpbpbqajepukxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623930.7194371-47-59611197952899/AnsiballZ_systemd_service.py'
Jan 05 14:38:51 compute-0 sudo[214708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:51 compute-0 python3.9[214710]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:38:51 compute-0 podman[214713]: 2026-01-05 14:38:51.561381894 +0000 UTC m=+0.074639539 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 14:38:52 compute-0 sudo[214708]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:52 compute-0 podman[214733]: 2026-01-05 14:38:52.592386269 +0000 UTC m=+0.075571615 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 14:38:53 compute-0 sudo[214904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bknxvvgvxdvkthgbecvdjoftqdlpxdkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623932.8311214-57-77656848093041/AnsiballZ_file.py'
Jan 05 14:38:53 compute-0 sudo[214904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:53 compute-0 python3.9[214906]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:53 compute-0 sudo[214904]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:54 compute-0 sudo[215057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpuqspylewxbyaczjehwxdzcjixwpkld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623933.8038194-65-174574749994979/AnsiballZ_file.py'
Jan 05 14:38:54 compute-0 sudo[215057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:54 compute-0 python3.9[215059]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:38:54 compute-0 sudo[215057]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:55 compute-0 sudo[215209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxotnoynqiasqcsdwdylimdgtuygscwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623934.7406645-74-280427534642437/AnsiballZ_command.py'
Jan 05 14:38:55 compute-0 sudo[215209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:55 compute-0 python3.9[215211]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:38:55 compute-0 sudo[215209]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:56 compute-0 python3.9[215363]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 14:38:57 compute-0 sudo[215513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbzmooxzmauhhqkurgrmuimwcqipukcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623936.8312163-92-51643802366961/AnsiballZ_systemd_service.py'
Jan 05 14:38:57 compute-0 sudo[215513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:57 compute-0 python3.9[215515]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:38:57 compute-0 systemd[1]: Reloading.
Jan 05 14:38:57 compute-0 systemd-rc-local-generator[215544]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:38:57 compute-0 systemd-sysv-generator[215548]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:38:57 compute-0 sudo[215513]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:58 compute-0 sudo[215700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtwwfuflkzuyetpcauwhvhkxbfokdnmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623938.1745033-100-188802893516350/AnsiballZ_command.py'
Jan 05 14:38:58 compute-0 sudo[215700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:58 compute-0 python3.9[215702]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:38:58 compute-0 sudo[215700]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:58 compute-0 podman[215704]: 2026-01-05 14:38:58.914565718 +0000 UTC m=+0.088069915 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:38:59 compute-0 sudo[215874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzrkmpcinaduzfrxdznqfxrcuxcvymqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623939.1270416-109-281063289228479/AnsiballZ_file.py'
Jan 05 14:38:59 compute-0 sudo[215874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:38:59 compute-0 python3.9[215876]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:38:59 compute-0 podman[201880]: time="2026-01-05T14:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:38:59 compute-0 sudo[215874]: pam_unix(sudo:session): session closed for user root
Jan 05 14:38:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21257 "" "Go-http-client/1.1"
Jan 05 14:38:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3011 "" "Go-http-client/1.1"
Jan 05 14:39:00 compute-0 python3.9[216026]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:39:01 compute-0 openstack_network_exporter[205179]: ERROR   14:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:39:01 compute-0 openstack_network_exporter[205179]: ERROR   14:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:39:01 compute-0 python3.9[216178]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:02 compute-0 python3.9[216299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623940.9680138-125-196003617929749/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:39:03 compute-0 python3.9[216449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:03 compute-0 python3.9[216570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623942.6542346-140-59194115107720/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:39:04 compute-0 sudo[216720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtbkocbvyqhdvritmgmrgywezwclgvhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623944.2505302-158-220283106551966/AnsiballZ_getent.py'
Jan 05 14:39:04 compute-0 sudo[216720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:05 compute-0 python3.9[216722]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 05 14:39:05 compute-0 sudo[216720]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:06 compute-0 python3.9[216873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:07 compute-0 python3.9[216994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767623945.8942475-186-6760781108192/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:08 compute-0 python3.9[217144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:08 compute-0 python3.9[217265]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767623947.4983916-186-280064661909383/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:09 compute-0 podman[217389]: 2026-01-05 14:39:09.427496863 +0000 UTC m=+0.089514145 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224)
Jan 05 14:39:09 compute-0 python3.9[217427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:10 compute-0 python3.9[217553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767623949.0149724-186-139289179871818/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:11 compute-0 python3.9[217703]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:39:11 compute-0 python3.9[217855]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:39:12 compute-0 python3.9[218007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:13 compute-0 python3.9[218128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623952.0354629-245-25969714071302/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:13 compute-0 sudo[218278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baagodoljahlidgafsoxqpvwwycxtulq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623953.2798564-260-5656542359249/AnsiballZ_file.py'
Jan 05 14:39:13 compute-0 sudo[218278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:13 compute-0 python3.9[218280]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:13 compute-0 sudo[218278]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:14 compute-0 sudo[218430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onjuazkctitsofptawyjgtcpjlqlhvqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623954.1082845-268-28881026622213/AnsiballZ_file.py'
Jan 05 14:39:14 compute-0 sudo[218430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:14 compute-0 python3.9[218432]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:14 compute-0 sudo[218430]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:15 compute-0 sudo[218582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhpbgwiwleildjngxygucyekgjmvgixs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623954.8822281-276-194776046912019/AnsiballZ_file.py'
Jan 05 14:39:15 compute-0 sudo[218582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:15 compute-0 python3.9[218584]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:39:15 compute-0 sudo[218582]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:16 compute-0 sudo[218734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cotwobcxzkoncwhqyfdlkdzvpkliygli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623955.718606-284-57426595669318/AnsiballZ_stat.py'
Jan 05 14:39:16 compute-0 sudo[218734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:16 compute-0 python3.9[218736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:16 compute-0 sudo[218734]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:16 compute-0 podman[218793]: 2026-01-05 14:39:16.60908354 +0000 UTC m=+0.096730964 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 14:39:16 compute-0 sudo[218876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkrjvlvqcbbdkjmbimlepgikzivmunad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623955.718606-284-57426595669318/AnsiballZ_copy.py'
Jan 05 14:39:16 compute-0 sudo[218876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:17 compute-0 python3.9[218878]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623955.718606-284-57426595669318/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:39:17 compute-0 sudo[218876]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:17 compute-0 sudo[218952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtnakczuqjgtydlaivmkowpawlojxuel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623955.718606-284-57426595669318/AnsiballZ_stat.py'
Jan 05 14:39:17 compute-0 sudo[218952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:17 compute-0 python3.9[218954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:17 compute-0 sudo[218952]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:17 compute-0 podman[218955]: 2026-01-05 14:39:17.641603598 +0000 UTC m=+0.127991952 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 05 14:39:18 compute-0 sudo[219102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cufkcdkxqilzxhemklvvppvvdkugmbtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623955.718606-284-57426595669318/AnsiballZ_copy.py'
Jan 05 14:39:18 compute-0 sudo[219102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:18 compute-0 python3.9[219104]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623955.718606-284-57426595669318/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:39:18 compute-0 sudo[219102]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:18 compute-0 sudo[219254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrldeklrtbcxksusebtvwmbmteooewoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623958.4450219-284-73016426577415/AnsiballZ_stat.py'
Jan 05 14:39:18 compute-0 sudo[219254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:19 compute-0 python3.9[219256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:19 compute-0 sudo[219254]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:19 compute-0 sudo[219377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hboxapfmzktzjwpwtrnytpvyktsgspqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623958.4450219-284-73016426577415/AnsiballZ_copy.py'
Jan 05 14:39:19 compute-0 sudo[219377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:19 compute-0 python3.9[219379]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767623958.4450219-284-73016426577415/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:39:19 compute-0 sudo[219377]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:20 compute-0 sudo[219529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgtbqsjiefaplbkgxpahypprmsmmsfdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623960.2963219-326-270760708884019/AnsiballZ_file.py'
Jan 05 14:39:20 compute-0 sudo[219529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:20 compute-0 python3.9[219531]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:20 compute-0 sudo[219529]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:21 compute-0 sudo[219681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bupengyjkjaishltnkncfdommendegdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623961.1173937-334-123535371982971/AnsiballZ_file.py'
Jan 05 14:39:21 compute-0 sudo[219681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:21 compute-0 python3.9[219683]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:39:21 compute-0 sudo[219681]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:22 compute-0 sudo[219846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guwrgsyowrwpweclllbrfjeggijgagsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623961.9058986-342-212018941345391/AnsiballZ_stat.py'
Jan 05 14:39:22 compute-0 sudo[219846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:22 compute-0 podman[219807]: 2026-01-05 14:39:22.293092801 +0000 UTC m=+0.088996992 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:39:22 compute-0 python3.9[219852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:22 compute-0 sudo[219846]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:22 compute-0 sudo[219988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsahvrbrgbpwxezvookikgwhojprdrke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623961.9058986-342-212018941345391/AnsiballZ_copy.py'
Jan 05 14:39:22 compute-0 sudo[219988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:22 compute-0 podman[219949]: 2026-01-05 14:39:22.971413631 +0000 UTC m=+0.102089337 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:39:23 compute-0 python3.9[220001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623961.9058986-342-212018941345391/.source.json _original_basename=.2179ow3h follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:23 compute-0 sudo[219988]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:23 compute-0 python3.9[220151]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:26 compute-0 sudo[220572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sozlxszuufigeedlvcaizejsfnjnsfwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623965.7920015-382-174133225202152/AnsiballZ_container_config_data.py'
Jan 05 14:39:26 compute-0 sudo[220572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:26 compute-0 python3.9[220574]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Jan 05 14:39:26 compute-0 sudo[220572]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:27 compute-0 nova_compute[185474]: 2026-01-05 14:39:27.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:39:27 compute-0 nova_compute[185474]: 2026-01-05 14:39:27.424 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:39:27 compute-0 nova_compute[185474]: 2026-01-05 14:39:27.424 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:39:27 compute-0 sudo[220724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cggrcwyievhaattznmiprirdqiahfdsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623966.937272-393-140121330372780/AnsiballZ_container_config_hash.py'
Jan 05 14:39:27 compute-0 sudo[220724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:27 compute-0 python3.9[220726]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 14:39:27 compute-0 sudo[220724]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:28 compute-0 nova_compute[185474]: 2026-01-05 14:39:28.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:39:28 compute-0 nova_compute[185474]: 2026-01-05 14:39:28.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:39:28 compute-0 nova_compute[185474]: 2026-01-05 14:39:28.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:39:28 compute-0 nova_compute[185474]: 2026-01-05 14:39:28.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:39:28 compute-0 sudo[220876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfndydhaevnjymjjlmelytcamhnduhiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623968.107715-402-16216611969127/AnsiballZ_podman_container_info.py'
Jan 05 14:39:28 compute-0 sudo[220876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:28 compute-0 python3.9[220878]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 14:39:29 compute-0 sudo[220876]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.396 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.447 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.448 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.448 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.448 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:39:29 compute-0 podman[220930]: 2026-01-05 14:39:29.614636632 +0000 UTC m=+0.093037387 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.675 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.676 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5854MB free_disk=72.48181915283203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.676 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.676 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:39:29 compute-0 podman[201880]: time="2026-01-05T14:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:39:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21257 "" "Go-http-client/1.1"
Jan 05 14:39:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3006 "" "Go-http-client/1.1"
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.821 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.821 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.849 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.862 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.863 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:39:29 compute-0 nova_compute[185474]: 2026-01-05 14:39:29.863 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:39:30 compute-0 sudo[221078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adelqjfbzhhnufumyfmoyadsotyxprxi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623970.1765726-415-92789397268082/AnsiballZ_edpm_container_manage.py'
Jan 05 14:39:30 compute-0 sudo[221078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:31 compute-0 python3[221080]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 14:39:31 compute-0 podman[221117]: 2026-01-05 14:39:31.391811731 +0000 UTC m=+0.082483554 container create 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:39:31 compute-0 podman[221117]: 2026-01-05 14:39:31.348083683 +0000 UTC m=+0.038755566 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 05 14:39:31 compute-0 python3[221080]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Jan 05 14:39:31 compute-0 openstack_network_exporter[205179]: ERROR   14:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:39:31 compute-0 openstack_network_exporter[205179]: ERROR   14:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:39:31 compute-0 sudo[221078]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:31 compute-0 nova_compute[185474]: 2026-01-05 14:39:31.863 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:39:31 compute-0 nova_compute[185474]: 2026-01-05 14:39:31.864 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:39:31 compute-0 nova_compute[185474]: 2026-01-05 14:39:31.864 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:39:31 compute-0 nova_compute[185474]: 2026-01-05 14:39:31.895 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 14:39:32 compute-0 sudo[221305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgelvxnxeuqsfqavsptvohyvbmngvpuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623971.8644688-423-163043423992565/AnsiballZ_stat.py'
Jan 05 14:39:32 compute-0 sudo[221305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:32 compute-0 python3.9[221307]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:39:32 compute-0 sudo[221305]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:33 compute-0 sudo[221459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpsvceojfhpxmzrybklvfbtezhjrsbje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623972.9032552-432-133483911973059/AnsiballZ_file.py'
Jan 05 14:39:33 compute-0 sudo[221459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:33 compute-0 python3.9[221461]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:33 compute-0 sudo[221459]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:33 compute-0 sudo[221535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prrvmzhywsexjhnvyyekzvuvhvaxrhua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623972.9032552-432-133483911973059/AnsiballZ_stat.py'
Jan 05 14:39:33 compute-0 sudo[221535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:34 compute-0 python3.9[221537]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:39:34 compute-0 sudo[221535]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:34 compute-0 sudo[221686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltbmtrntrtfkvjqhnhfdfwpxbooszgwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623974.1050396-432-14893931903336/AnsiballZ_copy.py'
Jan 05 14:39:34 compute-0 sudo[221686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:34 compute-0 python3.9[221688]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767623974.1050396-432-14893931903336/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:34 compute-0 sudo[221686]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:35 compute-0 sudo[221762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlkgjsxcdhemmbgvtchhqxtqgncagopf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623974.1050396-432-14893931903336/AnsiballZ_systemd.py'
Jan 05 14:39:35 compute-0 sudo[221762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:35 compute-0 python3.9[221764]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:39:35 compute-0 systemd[1]: Reloading.
Jan 05 14:39:36 compute-0 systemd-rc-local-generator[221792]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:39:36 compute-0 systemd-sysv-generator[221796]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:39:36 compute-0 sudo[221762]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:36 compute-0 sudo[221874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgymfepiugykmmjlftizniplrwazjivk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623974.1050396-432-14893931903336/AnsiballZ_systemd.py'
Jan 05 14:39:36 compute-0 sudo[221874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:37 compute-0 python3.9[221876]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:39:37 compute-0 systemd[1]: Reloading.
Jan 05 14:39:37 compute-0 systemd-rc-local-generator[221908]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:39:37 compute-0 systemd-sysv-generator[221912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:39:37 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 05 14:39:37 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b4f97b997a7aca79aa6ceee99d69b8bb3294bcb3c6a0a2e3443326081898fb/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 14:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b4f97b997a7aca79aa6ceee99d69b8bb3294bcb3c6a0a2e3443326081898fb/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 14:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b4f97b997a7aca79aa6ceee99d69b8bb3294bcb3c6a0a2e3443326081898fb/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 05 14:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b4f97b997a7aca79aa6ceee99d69b8bb3294bcb3c6a0a2e3443326081898fb/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 05 14:39:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec.
Jan 05 14:39:37 compute-0 podman[221916]: 2026-01-05 14:39:37.758669394 +0000 UTC m=+0.203340800 container init 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi)
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: + sudo -E kolla_set_configs
Jan 05 14:39:37 compute-0 podman[221916]: 2026-01-05 14:39:37.792084445 +0000 UTC m=+0.236755811 container start 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 14:39:37 compute-0 podman[221916]: ceilometer_agent_ipmi
Jan 05 14:39:37 compute-0 sudo[221937]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 05 14:39:37 compute-0 sudo[221937]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 14:39:37 compute-0 sudo[221937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 14:39:37 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 05 14:39:37 compute-0 sudo[221874]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Validating config file
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Copying service configuration files
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: INFO:__main__:Writing out command to execute
Jan 05 14:39:37 compute-0 sudo[221937]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: ++ cat /run_command
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: + ARGS=
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: + sudo kolla_copy_cacerts
Jan 05 14:39:37 compute-0 sudo[221963]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 05 14:39:37 compute-0 sudo[221963]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 14:39:37 compute-0 sudo[221963]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 14:39:37 compute-0 sudo[221963]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: + [[ ! -n '' ]]
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: + . kolla_extend_start
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: + umask 0022
Jan 05 14:39:37 compute-0 ceilometer_agent_ipmi[221931]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 05 14:39:37 compute-0 podman[221938]: 2026-01-05 14:39:37.912056745 +0000 UTC m=+0.102268152 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 14:39:37 compute-0 systemd[1]: 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec-3364d9a6d87b4d85.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 14:39:37 compute-0 systemd[1]: 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec-3364d9a6d87b4d85.service: Failed with result 'exit-code'.
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.773 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.773 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.773 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.773 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.773 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.773 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.773 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.774 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.774 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.774 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.774 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.774 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.774 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.774 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.774 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.774 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.774 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.775 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.776 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.777 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.778 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.779 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.780 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.781 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.781 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.781 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.781 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.781 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.781 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.781 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.782 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.782 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.782 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.782 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.782 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.782 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.783 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.783 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.783 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.783 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.783 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.783 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.783 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.783 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.783 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.784 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.784 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.784 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.784 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.784 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.784 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.784 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.784 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.784 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.785 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.785 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.785 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.786 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.786 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.786 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.786 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.787 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.787 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.787 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.787 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.787 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.787 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.788 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.788 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.788 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.788 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.788 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.788 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.788 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.788 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.808 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.809 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.809 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 05 14:39:38 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:38.887 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpesmo2um7/privsep.sock']
Jan 05 14:39:38 compute-0 sudo[222116]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpesmo2um7/privsep.sock
Jan 05 14:39:38 compute-0 sudo[222116]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 14:39:38 compute-0 sudo[222116]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 14:39:39 compute-0 python3.9[222114]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 14:39:39 compute-0 sudo[222116]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.570 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.571 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpesmo2um7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.455 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.460 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.462 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.462 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 05 14:39:39 compute-0 podman[222194]: 2026-01-05 14:39:39.580544026 +0000 UTC m=+0.068846231 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.680 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.680 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.682 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.682 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.683 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.683 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.683 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.683 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.684 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.684 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.684 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.684 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.685 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.690 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.690 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.690 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.690 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.691 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.691 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.691 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.691 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.691 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.692 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.692 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.692 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.692 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.693 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.693 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.693 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.693 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.693 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.694 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.694 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.694 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.694 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.694 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.695 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.695 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.695 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.695 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.695 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.696 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.696 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.696 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.696 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.696 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.696 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.697 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.697 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.697 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.697 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.698 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.698 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.698 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.699 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.699 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.699 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.700 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.700 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.700 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.700 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.700 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.700 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.701 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.701 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.701 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.701 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.701 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.702 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.702 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.702 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.702 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.702 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.703 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.703 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.703 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.703 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.703 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.704 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.704 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.704 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.704 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.704 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.705 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.705 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.705 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.705 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.705 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.706 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.706 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.706 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.706 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.706 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.707 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.707 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.707 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.707 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.707 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.708 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.708 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.708 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.708 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.708 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.709 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.709 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.709 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.709 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.709 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.710 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.710 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.710 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.710 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.710 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.710 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.711 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.711 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.711 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.711 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.712 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.712 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.712 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.712 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.712 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.713 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.713 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.713 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.713 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.713 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.714 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.714 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.714 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.714 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.714 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.715 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.715 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.715 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.715 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.715 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.715 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.715 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.716 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.716 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.716 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.716 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.716 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.716 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.716 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.716 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.717 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.717 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.717 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.717 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.717 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.717 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.717 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.718 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.718 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.718 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.718 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.718 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.718 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.718 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.718 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.718 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.719 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.719 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.719 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.719 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.719 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.719 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.719 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.720 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.720 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.720 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.720 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.720 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.720 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.720 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.720 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.721 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.721 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.721 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.721 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.721 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.721 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.721 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.721 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.721 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.722 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.722 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.722 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.722 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.722 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.722 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.722 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.723 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.723 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.723 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.723 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.723 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.723 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.723 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.724 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.724 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.724 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.724 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.724 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.724 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.725 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.725 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.725 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.725 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.725 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.725 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.725 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.725 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 05 14:39:39 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:39:39.729 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 05 14:39:39 compute-0 sudo[222294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuxwmveylqnczsrgdifglpxvuyybqyfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623979.4041593-473-72311802676758/AnsiballZ_stat.py'
Jan 05 14:39:39 compute-0 sudo[222294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:39 compute-0 python3.9[222296]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:39 compute-0 sudo[222294]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:40 compute-0 sudo[222419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfmkfssgaxqtfadkrfjcnwfwplyqufni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623979.4041593-473-72311802676758/AnsiballZ_copy.py'
Jan 05 14:39:40 compute-0 sudo[222419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:40 compute-0 python3.9[222421]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623979.4041593-473-72311802676758/.source.yaml _original_basename=.y0zfykbf follow=False checksum=c50e01d3a3ae56861dd633516bdbae664e43caba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:40 compute-0 sudo[222419]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:41 compute-0 sudo[222571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-judiztnzcxtrgelhebqyrbzwxvhkxxfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623981.1137986-490-101866370192427/AnsiballZ_file.py'
Jan 05 14:39:41 compute-0 sudo[222571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:41 compute-0 python3.9[222573]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:41 compute-0 sudo[222571]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:42 compute-0 sudo[222723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgxgpfqnqzavterpjxqqnkteyopacais ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623982.0214713-498-95436525137335/AnsiballZ_file.py'
Jan 05 14:39:42 compute-0 sudo[222723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:42 compute-0 python3.9[222725]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 14:39:42 compute-0 sudo[222723]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:43 compute-0 python3.9[222875]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:39:44.790 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:39:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:39:44.791 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:39:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:39:44.791 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:39:45 compute-0 sudo[223296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvhojvplmzcbrulcqexaokvbrfqwilgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623985.143578-532-164948715050883/AnsiballZ_container_config_data.py'
Jan 05 14:39:45 compute-0 sudo[223296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:45 compute-0 python3.9[223298]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Jan 05 14:39:45 compute-0 sudo[223296]: pam_unix(sudo:session): session closed for user root
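[editor's note] ansible-container_config_data is invoked with config_path=/var/lib/edpm-config/container-startup-config/kepler and config_pattern=*.json; conceptually it gathers the per-container startup JSON that edpm_container_manage consumes a few steps later. A rough Python sketch of that collection step, assuming (not knowing) that the module simply loads each matching file and layers config_overrides on top:

    import glob, json, os

    def collect_container_configs(config_path, config_pattern="*.json", overrides=None):
        """Load every JSON file matching config_pattern under config_path,
        keyed by file name without extension, then apply optional overrides."""
        configs = {}
        for path in sorted(glob.glob(os.path.join(config_path, config_pattern))):
            name = os.path.splitext(os.path.basename(path))[0]
            with open(path) as f:
                configs[name] = json.load(f)
        for name, extra in (overrides or {}).items():
            configs.setdefault(name, {}).update(extra)
        return configs

    # e.g. collect_container_configs("/var/lib/edpm-config/container-startup-config/kepler")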
Jan 05 14:39:46 compute-0 sudo[223448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqudcjnremiimlxyrtdrshczgqfiypbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623986.242308-543-25414748997148/AnsiballZ_container_config_hash.py'
Jan 05 14:39:46 compute-0 sudo[223448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:46 compute-0 python3.9[223450]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 14:39:46 compute-0 sudo[223448]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:47 compute-0 sudo[223611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqiliofiphycbfawolnkouxachxvyhae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623987.2117705-552-118051998234386/AnsiballZ_podman_container_info.py'
Jan 05 14:39:47 compute-0 sudo[223611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:47 compute-0 podman[223574]: 2026-01-05 14:39:47.617756713 +0000 UTC m=+0.096930770 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git)
Jan 05 14:39:47 compute-0 python3.9[223619]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 14:39:48 compute-0 sudo[223611]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:48 compute-0 podman[223673]: 2026-01-05 14:39:48.638529753 +0000 UTC m=+0.126160030 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 05 14:39:49 compute-0 sudo[223824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgyvbblapvnpuebxarunlwkyqlhfktiw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767623988.9187624-565-94086097521716/AnsiballZ_edpm_container_manage.py'
Jan 05 14:39:49 compute-0 sudo[223824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:49 compute-0 python3[223826]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 14:39:49 compute-0 podman[223866]: 2026-01-05 14:39:49.943750396 +0000 UTC m=+0.093396657 container create 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, version=9.4, maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vcs-type=git, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc., config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Jan 05 14:39:49 compute-0 podman[223866]: 2026-01-05 14:39:49.899995408 +0000 UTC m=+0.049641729 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 05 14:39:49 compute-0 python3[223826]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Jan 05 14:39:50 compute-0 sudo[223824]: pam_unix(sudo:session): session closed for user root
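[editor's note] The PODMAN-CONTAINER-DEBUG line above shows how edpm_container_manage flattens the kepler config_data dict into a podman create invocation: environment becomes repeated --env flags, volumes become --volume, ports become --publish, and the healthcheck test becomes --healthcheck-command. A simplified, illustrative Python reconstruction of that flattening (not the module's actual code):

    def podman_create_args(name, cfg):
        """Translate a config_data-style dict into a podman create argument list."""
        args = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        args.append(cfg["image"])
        command = cfg.get("command", [])
        args += command if isinstance(command, list) else [command]
        return args

    # e.g. podman_create_args("kepler", {...the config_data dict from the log...})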
Jan 05 14:39:50 compute-0 sudo[224054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbolyocvaiipeawvqotfgemalmvsbruv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623990.4119165-573-10044452029669/AnsiballZ_stat.py'
Jan 05 14:39:50 compute-0 sudo[224054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:50 compute-0 python3.9[224056]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:39:50 compute-0 sudo[224054]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:51 compute-0 sudo[224208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxthbucfiuoazdqtcdjceosrjxuqurwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623991.3431864-582-46516817875154/AnsiballZ_file.py'
Jan 05 14:39:51 compute-0 sudo[224208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:51 compute-0 python3.9[224210]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:51 compute-0 sudo[224208]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:52 compute-0 sudo[224284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zikrewzpudoxydfatechcskemdadztck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623991.3431864-582-46516817875154/AnsiballZ_stat.py'
Jan 05 14:39:52 compute-0 sudo[224284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:52 compute-0 podman[224286]: 2026-01-05 14:39:52.436677504 +0000 UTC m=+0.068523920 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 05 14:39:52 compute-0 python3.9[224287]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:39:52 compute-0 sudo[224284]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:53 compute-0 podman[224429]: 2026-01-05 14:39:53.245670557 +0000 UTC m=+0.075792187 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 14:39:53 compute-0 sudo[224470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efwwizurntbzsigfnxqsliwiyprusbky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623992.6202114-582-242330896010455/AnsiballZ_copy.py'
Jan 05 14:39:53 compute-0 sudo[224470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:53 compute-0 python3.9[224479]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767623992.6202114-582-242330896010455/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:53 compute-0 sudo[224470]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:53 compute-0 sudo[224553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yosnnxluddjxtdxdewikhuuwprsyrift ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623992.6202114-582-242330896010455/AnsiballZ_systemd.py'
Jan 05 14:39:53 compute-0 sudo[224553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:54 compute-0 python3.9[224555]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 14:39:54 compute-0 systemd[1]: Reloading.
Jan 05 14:39:54 compute-0 systemd-rc-local-generator[224578]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:39:54 compute-0 systemd-sysv-generator[224582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:39:54 compute-0 sudo[224553]: pam_unix(sudo:session): session closed for user root
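[editor's note] This ansible-systemd call is a bare daemon_reload so systemd notices the freshly copied edpm_kepler.service unit; the next ansible-systemd call below then enables and restarts that unit. The rc-local and SysV-generator messages are routine output of every reload on this host. Roughly equivalent shell steps, wrapped in Python purely for illustration:

    import subprocess

    def deploy_unit(name):
        """Reload systemd so a new unit file is visible, then enable and restart it."""
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", name], check=True)
        subprocess.run(["systemctl", "restart", name], check=True)

    # deploy_unit("edpm_kepler.service")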
Jan 05 14:39:54 compute-0 sudo[224664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyxkwsxcflhlibuimtyssqsfcltevind ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623992.6202114-582-242330896010455/AnsiballZ_systemd.py'
Jan 05 14:39:54 compute-0 sudo[224664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:55 compute-0 python3.9[224666]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 14:39:55 compute-0 systemd[1]: Reloading.
Jan 05 14:39:55 compute-0 systemd-rc-local-generator[224696]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 14:39:55 compute-0 systemd-sysv-generator[224700]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 14:39:55 compute-0 systemd[1]: Starting kepler container...
Jan 05 14:39:55 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:39:55 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510.
Jan 05 14:39:55 compute-0 podman[224706]: 2026-01-05 14:39:55.68229481 +0000 UTC m=+0.154162395 container init 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0)
Jan 05 14:39:55 compute-0 kepler[224719]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 05 14:39:55 compute-0 podman[224706]: 2026-01-05 14:39:55.721933975 +0000 UTC m=+0.193801520 container start 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, release-0.7.12=, config_id=kepler, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., container_name=kepler)
Jan 05 14:39:55 compute-0 podman[224706]: kepler
Jan 05 14:39:55 compute-0 kepler[224719]: I0105 14:39:55.729576       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 05 14:39:55 compute-0 kepler[224719]: I0105 14:39:55.729827       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 05 14:39:55 compute-0 kepler[224719]: I0105 14:39:55.729854       1 config.go:295] kernel version: 5.14
Jan 05 14:39:55 compute-0 kepler[224719]: I0105 14:39:55.730993       1 power.go:78] Unable to obtain power, use estimate method
Jan 05 14:39:55 compute-0 kepler[224719]: I0105 14:39:55.731037       1 redfish.go:169] failed to get redfish credential file path
Jan 05 14:39:55 compute-0 kepler[224719]: I0105 14:39:55.731832       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 05 14:39:55 compute-0 kepler[224719]: I0105 14:39:55.731886       1 power.go:79] using none to obtain power
Jan 05 14:39:55 compute-0 kepler[224719]: E0105 14:39:55.731916       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 05 14:39:55 compute-0 kepler[224719]: E0105 14:39:55.731959       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 05 14:39:55 compute-0 kepler[224719]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 05 14:39:55 compute-0 systemd[1]: Started kepler container.
Jan 05 14:39:55 compute-0 kepler[224719]: I0105 14:39:55.735552       1 exporter.go:84] Number of CPUs: 8
Jan 05 14:39:55 compute-0 sudo[224664]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:55 compute-0 podman[224731]: 2026-01-05 14:39:55.839687551 +0000 UTC m=+0.096899671 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container)
Jan 05 14:39:55 compute-0 systemd[1]: 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510-2f52f392d2aa0aa6.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 14:39:55 compute-0 systemd[1]: 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510-2f52f392d2aa0aa6.service: Failed with result 'exit-code'.
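[editor's note] The failed transient 8266…-2f52f392d2aa0aa6.service above is the first healthcheck run exiting non-zero while kepler is still coming up (health_status=starting, health_failing_streak=1); podman keeps retrying on its timer until the check passes, as the healthy entries for the other containers show. One way to watch a container's health from the host, sketched in Python around podman inspect (the exact key is State.Health or State.Healthcheck depending on the podman version):

    import json, subprocess

    def container_health(name):
        """Return the health status string podman records for a container, if any."""
        out = subprocess.run(["podman", "inspect", name],
                             check=True, capture_output=True, text=True).stdout
        state = json.loads(out)[0].get("State", {})
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "none")

    # e.g. container_health("kepler") -> "starting", later "healthy" once the check passes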
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.213252       1 watcher.go:83] Using in cluster k8s config
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.214901       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 05 14:39:56 compute-0 kepler[224719]: E0105 14:39:56.215245       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.222259       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.222675       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.229689       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.230098       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.239672       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.240074       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.240342       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.250355       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.250743       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.250962       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.251176       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.251445       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.251675       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.251965       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.252250       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.252497       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.252754       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.253052       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 05 14:39:56 compute-0 kepler[224719]: I0105 14:39:56.253873       1 exporter.go:208] Started Kepler in 524.785222ms
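[editor's note] Once the exporter logs "starting to listen on 0.0.0.0:8888", kepler serves Prometheus metrics over HTTP on that port (the same 8888:8888 publish from the podman create above). A quick way to confirm it is answering, assuming the conventional /metrics path:

    import urllib.request

    def kepler_metric_names(host="localhost", port=8888):
        """Fetch the exporter's metrics page and return the kepler_* metric names seen."""
        with urllib.request.urlopen(f"http://{host}:{port}/metrics", timeout=5) as resp:
            text = resp.read().decode()
        return sorted({line.split("{")[0].split(" ")[0]
                       for line in text.splitlines()
                       if line.startswith("kepler_")})

    # e.g. kepler_metric_names() -> a list of kepler_* metric names (names vary by config)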
Jan 05 14:39:56 compute-0 python3.9[224915]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 14:39:57 compute-0 sudo[225065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozzxnlkdmztbrfbxecyrfncqqbzekkro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623997.370567-623-143914063243220/AnsiballZ_stat.py'
Jan 05 14:39:57 compute-0 sudo[225065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:58 compute-0 python3.9[225067]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:39:58 compute-0 sudo[225065]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:58 compute-0 sudo[225190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqmlnyurjrssbfrfiulypkmarxacyxji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623997.370567-623-143914063243220/AnsiballZ_copy.py'
Jan 05 14:39:58 compute-0 sudo[225190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:39:58 compute-0 python3.9[225192]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767623997.370567-623-143914063243220/.source.yaml _original_basename=.01qb_eh5 follow=False checksum=de09a1e32bfc6a71de93eee806f407ca37c91fbd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:39:59 compute-0 sudo[225190]: pam_unix(sudo:session): session closed for user root
Jan 05 14:39:59 compute-0 podman[201880]: time="2026-01-05T14:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:39:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27281 "" "Go-http-client/1.1"
Jan 05 14:39:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3851 "" "Go-http-client/1.1"
Jan 05 14:39:59 compute-0 sudo[225358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxsoifzbrjaqemrzlwoftpqqoxgyjiln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767623999.3059509-638-128168906523566/AnsiballZ_systemd.py'
Jan 05 14:39:59 compute-0 podman[225316]: 2026-01-05 14:39:59.871521032 +0000 UTC m=+0.119228976 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 14:39:59 compute-0 sudo[225358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:00 compute-0 python3.9[225367]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:40:00 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Jan 05 14:40:00 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:40:00.351 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Jan 05 14:40:00 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:40:00.453 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Jan 05 14:40:00 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:40:00.453 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Jan 05 14:40:00 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:40:00.454 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Jan 05 14:40:00 compute-0 ceilometer_agent_ipmi[221931]: 2026-01-05 14:40:00.467 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Jan 05 14:40:00 compute-0 systemd[1]: libpod-97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec.scope: Deactivated successfully.
Jan 05 14:40:00 compute-0 systemd[1]: libpod-97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec.scope: Consumed 2.185s CPU time.
Jan 05 14:40:00 compute-0 podman[225371]: 2026-01-05 14:40:00.663289268 +0000 UTC m=+0.392007369 container died 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 14:40:00 compute-0 systemd[1]: 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec-3364d9a6d87b4d85.timer: Deactivated successfully.
Jan 05 14:40:00 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec.
Jan 05 14:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec-userdata-shm.mount: Deactivated successfully.
Jan 05 14:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0b4f97b997a7aca79aa6ceee99d69b8bb3294bcb3c6a0a2e3443326081898fb-merged.mount: Deactivated successfully.
Jan 05 14:40:00 compute-0 podman[225371]: 2026-01-05 14:40:00.746940648 +0000 UTC m=+0.475658719 container cleanup 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 14:40:00 compute-0 podman[225371]: ceilometer_agent_ipmi
Jan 05 14:40:00 compute-0 podman[225400]: ceilometer_agent_ipmi
Jan 05 14:40:00 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Jan 05 14:40:00 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Jan 05 14:40:00 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 05 14:40:01 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b4f97b997a7aca79aa6ceee99d69b8bb3294bcb3c6a0a2e3443326081898fb/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 14:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b4f97b997a7aca79aa6ceee99d69b8bb3294bcb3c6a0a2e3443326081898fb/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 14:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b4f97b997a7aca79aa6ceee99d69b8bb3294bcb3c6a0a2e3443326081898fb/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 05 14:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0b4f97b997a7aca79aa6ceee99d69b8bb3294bcb3c6a0a2e3443326081898fb/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 05 14:40:01 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec.
Jan 05 14:40:01 compute-0 podman[225411]: 2026-01-05 14:40:01.162447763 +0000 UTC m=+0.250430676 container init 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: + sudo -E kolla_set_configs
Jan 05 14:40:01 compute-0 podman[225411]: 2026-01-05 14:40:01.211746232 +0000 UTC m=+0.299729115 container start 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:40:01 compute-0 sudo[225432]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 05 14:40:01 compute-0 sudo[225432]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 14:40:01 compute-0 sudo[225432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 14:40:01 compute-0 podman[225411]: ceilometer_agent_ipmi
Jan 05 14:40:01 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 05 14:40:01 compute-0 sudo[225358]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Validating config file
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Copying service configuration files
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: INFO:__main__:Writing out command to execute
Jan 05 14:40:01 compute-0 sudo[225432]: pam_unix(sudo:session): session closed for user root
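The kolla_set_configs pass above implements the COPY_ALWAYS strategy declared in the container environment: for each entry in /var/lib/kolla/config_files/config.json it deletes the destination, copies the source file in, and re-applies ownership and mode. A minimal sketch of that copy loop in Python, assuming a simplified config.json whose 'config_files' entries carry 'source', 'dest', 'owner' and 'perm' keys (the real kolla script also handles globs, directories and preserve_properties):

    import json
    import os
    import pwd
    import shutil

    def copy_service_configs(config_path="/var/lib/kolla/config_files/config.json"):
        """Simplified COPY_ALWAYS pass: delete dest, copy source, reset owner and mode."""
        with open(config_path) as f:
            config = json.load(f)
        for item in config.get("config_files", []):
            source, dest = item["source"], item["dest"]
            if os.path.exists(dest):
                os.remove(dest)                        # "Deleting <dest>"
            shutil.copy2(source, dest)                 # "Copying <source> to <dest>"
            user = pwd.getpwnam(item.get("owner", "root"))
            os.chown(dest, user.pw_uid, user.pw_gid)   # "Setting permission for <dest>"
            os.chmod(dest, int(item.get("perm", "0600"), 8))

    if __name__ == "__main__":
        copy_service_configs()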
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: ++ cat /run_command
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: + ARGS=
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: + sudo kolla_copy_cacerts
Jan 05 14:40:01 compute-0 sudo[225452]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 05 14:40:01 compute-0 sudo[225452]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 14:40:01 compute-0 sudo[225452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 14:40:01 compute-0 podman[225433]: 2026-01-05 14:40:01.338166252 +0000 UTC m=+0.106233874 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 05 14:40:01 compute-0 sudo[225452]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:01 compute-0 systemd[1]: 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec-6c3fa6e17692aa66.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 14:40:01 compute-0 systemd[1]: 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec-6c3fa6e17692aa66.service: Failed with result 'exit-code'.
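The 97f8675d...-6c3fa6e17692aa66.service unit failing here is the transient unit podman spawns (from a matching .timer) to run the container's configured healthcheck, '/openstack/healthcheck ipmi'; an exit status of 1 at this point most likely just means the check reported unhealthy while the agent was still starting, consistent with the health_status=starting, health_failing_streak=1 event logged above. A hedged sketch of driving the same check by hand from the host, assuming the podman CLI is available:

    import subprocess

    def run_healthcheck(container="ceilometer_agent_ipmi"):
        """Run the container's configured healthcheck; podman exits 0 for healthy, 1 for unhealthy."""
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        status = "healthy" if result.returncode == 0 else "unhealthy"
        print(f"{container}: {status} ({result.stdout.strip() or result.stderr.strip()})")
        return result.returncode

    if __name__ == "__main__":
        run_healthcheck()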
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: + [[ ! -n '' ]]
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: + . kolla_extend_start
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: + umask 0022
Jan 05 14:40:01 compute-0 ceilometer_agent_ipmi[225426]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
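The shell trace ending in exec is the tail of kolla_start: the command that kolla_set_configs wrote out is read back from /run_command and exec'd, so ceilometer-polling replaces the shell as the container's main process. A rough Python equivalent of that last step (same paths as in the trace; only the read-and-exec behaviour is being illustrated):

    import os
    import shlex

    def exec_run_command(path="/run_command"):
        """Read the command written at configuration time and replace this process with it."""
        with open(path) as f:
            cmd = shlex.split(f.read().strip())
        os.execvp(cmd[0], cmd)   # equivalent of `exec $CMD $ARGS` in kolla_start

    if __name__ == "__main__":
        exec_run_command()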
Jan 05 14:40:01 compute-0 openstack_network_exporter[205179]: ERROR   14:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:40:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:40:01 compute-0 openstack_network_exporter[205179]: ERROR   14:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:40:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:40:02 compute-0 sudo[225606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwqeabauxheyxkuxsytxgudsgmbedfgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624001.596571-646-241508143083620/AnsiballZ_systemd.py'
Jan 05 14:40:02 compute-0 sudo[225606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.286 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.287 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.287 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.287 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.287 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.287 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.287 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.287 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.288 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.288 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.288 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.288 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.288 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.288 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.288 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.288 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.288 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.288 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.289 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.290 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.291 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.292 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.293 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.293 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.293 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.293 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.293 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.293 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.293 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.293 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.293 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.293 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.294 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.295 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.296 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.296 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.296 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.296 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.296 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.296 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.296 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.296 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.296 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.296 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.297 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.298 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.299 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.300 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.300 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.300 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.300 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.300 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.300 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.300 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.300 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.300 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.300 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.301 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.301 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.301 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.301 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.301 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.301 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.301 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.301 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.301 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.302 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.302 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.302 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
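The DEBUG block that just closed is cotyledon's oslo_config_glue calling log_opt_values() on the fully resolved configuration: command-line arguments, /etc/ceilometer/ceilometer.conf, and the drop-ins under /etc/ceilometer/ceilometer.conf.d (01-ceilometer-custom.conf, then 02-ceilometer-host-specific.conf) are merged, with later sources overriding earlier ones. A minimal sketch of the same layering with oslo.config, using one illustrative option rather than Ceilometer's real option set:

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    # Illustrative option only; Ceilometer registers its real options itself.
    OPTS = [cfg.IntOpt("batch_size", default=50, help="Illustrative option.")]

    def load_conf(argv):
        conf = cfg.ConfigOpts()
        conf.register_opts(OPTS)
        # The main file is parsed first, then every *.conf in the config dir in
        # lexical order, so 02-*.conf wins over 01-*.conf for overlapping options.
        conf(argv,
             project="ceilometer",
             default_config_files=["/etc/ceilometer/ceilometer.conf"],
             default_config_dirs=["/etc/ceilometer/ceilometer.conf.d"])
        conf.log_opt_values(LOG, logging.DEBUG)   # emits a dump like the one above
        return conf

    if __name__ == "__main__":
        logging.basicConfig(level=logging.DEBUG)
        load_conf([])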
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.327 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.329 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.331 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 05 14:40:02 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.358 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpo2wl65rk/privsep.sock']
Jan 05 14:40:02 compute-0 sudo[225613]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpo2wl65rk/privsep.sock
Jan 05 14:40:02 compute-0 sudo[225613]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 14:40:02 compute-0 sudo[225613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 14:40:02 compute-0 python3.9[225608]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:40:02 compute-0 systemd[1]: Stopping kepler container...
Jan 05 14:40:02 compute-0 kepler[224719]: I0105 14:40:02.673386       1 exporter.go:218] Received shutdown signal
Jan 05 14:40:02 compute-0 kepler[224719]: I0105 14:40:02.673854       1 exporter.go:226] Exiting...
Jan 05 14:40:02 compute-0 systemd[1]: libpod-8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510.scope: Deactivated successfully.
Jan 05 14:40:02 compute-0 podman[225619]: 2026-01-05 14:40:02.867539414 +0000 UTC m=+0.270449010 container died 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, container_name=kepler, version=9.4, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30)
Jan 05 14:40:02 compute-0 systemd[1]: 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510-2f52f392d2aa0aa6.timer: Deactivated successfully.
Jan 05 14:40:02 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510.
Jan 05 14:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510-userdata-shm.mount: Deactivated successfully.
Jan 05 14:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-21ee2337e5ebcab8f10aea8be45903b7747235835cdf6c72b6f452b9eb1600da-merged.mount: Deactivated successfully.
Jan 05 14:40:02 compute-0 podman[225619]: 2026-01-05 14:40:02.91383363 +0000 UTC m=+0.316743226 container cleanup 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container)
Jan 05 14:40:02 compute-0 podman[225619]: kepler
Jan 05 14:40:03 compute-0 podman[225643]: kepler
Jan 05 14:40:03 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Jan 05 14:40:03 compute-0 systemd[1]: Stopped kepler container.
Jan 05 14:40:03 compute-0 systemd[1]: Starting kepler container...
Jan 05 14:40:03 compute-0 sudo[225613]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.041 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.042 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpo2wl65rk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.933 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.940 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.945 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:02.945 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
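These privsep lines show the pattern the agent uses for privileged operations: the unprivileged ceilometer user (uid 42405) runs sudo ceilometer-rootwrap to start privsep-helper as root, the helper binds the unix socket under /tmp/tmpo2wl65rk, reports the capabilities it kept (CAP_SYS_ADMIN and friends), and then serves privileged calls back to the main process. A rough sketch of how such a context and entrypoint are declared with oslo.privsep; the context name and function below are made up for illustration, while Ceilometer ships its own ceilometer.privsep.sys_admin_pctxt with a similar capability set:

    from oslo_privsep import capabilities, priv_context

    # Hypothetical context for illustration.
    sys_admin_pctxt = priv_context.PrivContext(
        "demo",
        cfg_section="privsep",
        pypath=__name__ + ".sys_admin_pctxt",
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_NET_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def read_protected_file(path):
        """Body runs inside the root privsep daemon, not in the calling process."""
        with open(path, "rb") as f:
            return f.read()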
Jan 05 14:40:03 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.172 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.173 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.174 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.174 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.174 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.175 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.175 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.175 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.175 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.175 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.175 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.176 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.176 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.179 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.180 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.180 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.180 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.180 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.180 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.180 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.180 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.180 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.181 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.181 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.181 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.181 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.181 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510.
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.181 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.182 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.182 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.182 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.182 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.182 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.182 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.183 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.183 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.183 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.183 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.183 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.183 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.183 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.183 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.184 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.184 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.184 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.184 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.184 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.184 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.184 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.184 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.185 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.185 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.185 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.185 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.185 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.185 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.185 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.186 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.186 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.186 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.186 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.186 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.186 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.186 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.186 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.187 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.187 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.187 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.187 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.187 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.187 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.188 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.188 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.188 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.188 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.188 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.188 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.188 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.189 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.189 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.189 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.189 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.189 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.189 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 podman[225654]: 2026-01-05 14:40:03.189680386 +0000 UTC m=+0.143143246 container init 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, release-0.7.12=, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, container_name=kepler, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.190 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.190 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.190 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.190 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.190 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.190 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.191 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.191 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.191 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.191 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.191 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.192 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.192 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.192 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.192 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.192 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.192 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.192 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.193 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.193 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.193 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.193 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.193 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.193 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.193 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.194 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.194 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.194 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.194 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.194 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.194 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.194 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.195 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.195 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.195 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.195 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.195 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.195 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.195 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.196 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.196 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.196 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.196 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.196 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.196 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.196 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.197 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.197 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.197 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.197 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.197 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.197 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.197 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.198 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.198 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.198 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.198 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.198 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.198 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.198 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.199 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.199 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.199 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.199 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.199 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.199 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.199 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.199 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.200 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.200 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.200 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.200 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.200 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.200 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.200 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.200 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.201 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.201 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.201 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.201 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.201 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.202 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.202 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.202 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.202 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.202 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.202 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.202 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.202 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.203 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.203 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.203 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.203 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.203 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.203 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.203 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.203 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.204 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.204 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.204 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.204 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.204 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.204 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.204 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.204 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.205 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.205 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.205 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.205 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.205 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.205 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.205 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.205 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.206 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.206 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.206 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.206 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.206 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.206 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.206 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.207 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.207 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.207 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.207 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.207 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.207 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.207 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.208 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.208 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.208 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.208 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.208 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 05 14:40:03 compute-0 ceilometer_agent_ipmi[225426]: 2026-01-05 14:40:03.212 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 05 14:40:03 compute-0 kepler[225671]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 05 14:40:03 compute-0 podman[225654]: 2026-01-05 14:40:03.222505486 +0000 UTC m=+0.175968326 container start 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, name=ubi9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, release-0.7.12=)
Jan 05 14:40:03 compute-0 podman[225654]: kepler
Jan 05 14:40:03 compute-0 systemd[1]: Started kepler container.
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.235809       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.236000       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.236022       1 config.go:295] kernel version: 5.14
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.236742       1 power.go:78] Unable to obtain power, use estimate method
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.236781       1 redfish.go:169] failed to get redfish credential file path
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.237339       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.237358       1 power.go:79] using none to obtain power
Jan 05 14:40:03 compute-0 kepler[225671]: E0105 14:40:03.237379       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 05 14:40:03 compute-0 kepler[225671]: E0105 14:40:03.237408       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 05 14:40:03 compute-0 kepler[225671]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.240273       1 exporter.go:84] Number of CPUs: 8
Jan 05 14:40:03 compute-0 sudo[225606]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:03 compute-0 podman[225683]: 2026-01-05 14:40:03.366468474 +0000 UTC m=+0.127164173 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.29.0, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release-0.7.12=, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 05 14:40:03 compute-0 systemd[1]: 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510-830eada58663988.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 14:40:03 compute-0 systemd[1]: 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510-830eada58663988.service: Failed with result 'exit-code'.
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.858308       1 watcher.go:83] Using in cluster k8s config
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.858370       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 05 14:40:03 compute-0 kepler[225671]: E0105 14:40:03.858450       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.866392       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.866448       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 05 14:40:03 compute-0 sudo[225854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szjcucqtmhhreedvyjuvjoajgownuezt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624003.500613-654-152748755339300/AnsiballZ_find.py'
Jan 05 14:40:03 compute-0 sudo[225854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.874467       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.874528       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.885649       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.885708       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.885730       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902140       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902271       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902282       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902290       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902300       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902318       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902404       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902445       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902476       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902503       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.902676       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 05 14:40:03 compute-0 kepler[225671]: I0105 14:40:03.903753       1 exporter.go:208] Started Kepler in 668.235603ms
Jan 05 14:40:04 compute-0 python3.9[225864]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 14:40:04 compute-0 sudo[225854]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:05 compute-0 sudo[226016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezjtogizihvhzddtjcoqrggelcaljain ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624004.6820424-664-186247415713775/AnsiballZ_podman_container_info.py'
Jan 05 14:40:05 compute-0 sudo[226016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:05 compute-0 python3.9[226018]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 05 14:40:05 compute-0 sudo[226016]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:06 compute-0 sudo[226181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhvqjjtjuspvrwmxtwwdbeeezyimxjkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624006.0170095-672-51338847530455/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:06 compute-0 sudo[226181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:06 compute-0 python3.9[226183]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:07 compute-0 systemd[1]: Started libpod-conmon-eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c.scope.
Jan 05 14:40:07 compute-0 podman[226184]: 2026-01-05 14:40:07.164477338 +0000 UTC m=+0.142421004 container exec eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 14:40:07 compute-0 podman[226184]: 2026-01-05 14:40:07.174613134 +0000 UTC m=+0.152556800 container exec_died eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 14:40:07 compute-0 sudo[226181]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:07 compute-0 systemd[1]: libpod-conmon-eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c.scope: Deactivated successfully.
Jan 05 14:40:08 compute-0 sudo[226361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwdqblsdsmuyvzavexmcnsgjmmupfsro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624007.5687468-680-150440898173426/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:08 compute-0 sudo[226361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:08 compute-0 python3.9[226363]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:08 compute-0 systemd[1]: Started libpod-conmon-eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c.scope.
Jan 05 14:40:08 compute-0 podman[226364]: 2026-01-05 14:40:08.450712913 +0000 UTC m=+0.140544255 container exec eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 14:40:08 compute-0 podman[226364]: 2026-01-05 14:40:08.484776338 +0000 UTC m=+0.174607670 container exec_died eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 14:40:08 compute-0 sudo[226361]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:08 compute-0 systemd[1]: libpod-conmon-eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c.scope: Deactivated successfully.
Jan 05 14:40:09 compute-0 sudo[226540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcmkjzindycnvftysrzcfjjrqweinhau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624008.889852-688-131730352120891/AnsiballZ_file.py'
Jan 05 14:40:09 compute-0 sudo[226540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:09 compute-0 python3.9[226542]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:09 compute-0 sudo[226540]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:10 compute-0 sudo[226709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eewtobshdpwnnqdneraytbztzpzkbhtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624010.0667748-697-65758528607413/AnsiballZ_podman_container_info.py'
Jan 05 14:40:10 compute-0 podman[226669]: 2026-01-05 14:40:10.64062367 +0000 UTC m=+0.117453448 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 14:40:10 compute-0 sudo[226709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:10 compute-0 python3.9[226714]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 05 14:40:10 compute-0 sudo[226709]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:11 compute-0 sudo[226874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jghfmrhvtklnsljdszexljzwsrwsgoug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624011.3126805-705-14708245271491/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:11 compute-0 sudo[226874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:12 compute-0 python3.9[226876]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:12 compute-0 systemd[1]: Started libpod-conmon-c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827.scope.
Jan 05 14:40:12 compute-0 podman[226877]: 2026-01-05 14:40:12.26277723 +0000 UTC m=+0.153093615 container exec c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:40:12 compute-0 podman[226877]: 2026-01-05 14:40:12.270995953 +0000 UTC m=+0.161312328 container exec_died c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 05 14:40:12 compute-0 sudo[226874]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:12 compute-0 systemd[1]: libpod-conmon-c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827.scope: Deactivated successfully.
Jan 05 14:40:13 compute-0 sudo[227058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtaqmrtdspvktjwyatcuppbzkwkfvgig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624012.6393502-713-122606415635337/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:13 compute-0 sudo[227058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:13 compute-0 python3.9[227060]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:13 compute-0 systemd[1]: Started libpod-conmon-c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827.scope.
Jan 05 14:40:13 compute-0 podman[227061]: 2026-01-05 14:40:13.521629952 +0000 UTC m=+0.126841764 container exec c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 14:40:13 compute-0 podman[227061]: 2026-01-05 14:40:13.530848622 +0000 UTC m=+0.136060424 container exec_died c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 05 14:40:13 compute-0 sudo[227058]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:13 compute-0 systemd[1]: libpod-conmon-c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827.scope: Deactivated successfully.
Jan 05 14:40:14 compute-0 sudo[227239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efriboafycylttqublhiduevggyrubzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624013.9133697-721-237551834527194/AnsiballZ_file.py'
Jan 05 14:40:14 compute-0 sudo[227239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:14 compute-0 python3.9[227241]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:14 compute-0 sudo[227239]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:15 compute-0 sudo[227391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duvljapkeocozitpdljpzkryjmorovxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624015.0522668-730-156713063244391/AnsiballZ_podman_container_info.py'
Jan 05 14:40:15 compute-0 sudo[227391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:15 compute-0 python3.9[227393]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 05 14:40:15 compute-0 sudo[227391]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:16 compute-0 sudo[227556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbnexvciqrfdwygqvglpgkpoynmecdwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624016.1398244-738-183017456213972/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:16 compute-0 sudo[227556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:16 compute-0 python3.9[227558]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:17 compute-0 systemd[1]: Started libpod-conmon-7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1.scope.
Jan 05 14:40:17 compute-0 podman[227559]: 2026-01-05 14:40:17.111010336 +0000 UTC m=+0.172085110 container exec 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 14:40:17 compute-0 podman[227559]: 2026-01-05 14:40:17.146350975 +0000 UTC m=+0.207425319 container exec_died 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 05 14:40:17 compute-0 sudo[227556]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:17 compute-0 systemd[1]: libpod-conmon-7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1.scope: Deactivated successfully.
Jan 05 14:40:18 compute-0 sudo[227752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mparwrgtwwmzzzlswxauijjeocpjapsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624017.5392065-746-31666742914976/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:18 compute-0 sudo[227752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:18 compute-0 podman[227711]: 2026-01-05 14:40:18.078868481 +0000 UTC m=+0.133875064 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 14:40:18 compute-0 python3.9[227758]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:18 compute-0 systemd[1]: Started libpod-conmon-7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1.scope.
Jan 05 14:40:18 compute-0 podman[227759]: 2026-01-05 14:40:18.450542416 +0000 UTC m=+0.132753393 container exec 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 05 14:40:18 compute-0 podman[227759]: 2026-01-05 14:40:18.486017919 +0000 UTC m=+0.168228886 container exec_died 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 05 14:40:18 compute-0 sudo[227752]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:18 compute-0 systemd[1]: libpod-conmon-7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1.scope: Deactivated successfully.
Jan 05 14:40:19 compute-0 sudo[227952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofvqszlltoroiokdaphippnxdlkeruvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624018.8647795-754-70964422030014/AnsiballZ_file.py'
Jan 05 14:40:19 compute-0 sudo[227952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:19 compute-0 podman[227913]: 2026-01-05 14:40:19.641820124 +0000 UTC m=+0.209743072 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 05 14:40:19 compute-0 python3.9[227958]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:19 compute-0 sudo[227952]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:20 compute-0 sudo[228118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpkfjrcvbjwyngyxykqsrimshuvimyvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624020.1308804-763-219062462493271/AnsiballZ_podman_container_info.py'
Jan 05 14:40:20 compute-0 sudo[228118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:20 compute-0 python3.9[228120]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 05 14:40:21 compute-0 sudo[228118]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:22 compute-0 sudo[228280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fipihdbkjhwuriplvwlukdazcjwjixnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624021.456509-771-49320224194283/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:22 compute-0 sudo[228280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:22 compute-0 python3.9[228282]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:22 compute-0 systemd[1]: Started libpod-conmon-fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5.scope.
Jan 05 14:40:22 compute-0 podman[228283]: 2026-01-05 14:40:22.447100471 +0000 UTC m=+0.181057555 container exec fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:40:22 compute-0 podman[228283]: 2026-01-05 14:40:22.483305903 +0000 UTC m=+0.217262947 container exec_died fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:40:22 compute-0 systemd[1]: libpod-conmon-fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5.scope: Deactivated successfully.
Jan 05 14:40:22 compute-0 sudo[228280]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:22 compute-0 podman[228308]: 2026-01-05 14:40:22.663297937 +0000 UTC m=+0.134012767 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 05 14:40:23 compute-0 sudo[228492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaiwpwivyzusvwfgpahjdyscsnwyjhok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624022.8623083-779-128007330089783/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:23 compute-0 sudo[228492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:23 compute-0 podman[228453]: 2026-01-05 14:40:23.432681736 +0000 UTC m=+0.115236578 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:40:23 compute-0 python3.9[228503]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:23 compute-0 systemd[1]: Started libpod-conmon-fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5.scope.
Jan 05 14:40:23 compute-0 podman[228504]: 2026-01-05 14:40:23.803365065 +0000 UTC m=+0.128978031 container exec fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:40:23 compute-0 podman[228504]: 2026-01-05 14:40:23.838449018 +0000 UTC m=+0.164061934 container exec_died fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:40:23 compute-0 systemd[1]: libpod-conmon-fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5.scope: Deactivated successfully.
Jan 05 14:40:23 compute-0 sudo[228492]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:24 compute-0 sudo[228681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jecjribkfxrlrlcdlgdxefguyjzcwsbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624024.359651-787-173903571230945/AnsiballZ_file.py'
Jan 05 14:40:24 compute-0 sudo[228681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:25 compute-0 python3.9[228683]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:25 compute-0 sudo[228681]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:25 compute-0 nova_compute[185474]: 2026-01-05 14:40:25.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:25 compute-0 nova_compute[185474]: 2026-01-05 14:40:25.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 14:40:25 compute-0 nova_compute[185474]: 2026-01-05 14:40:25.428 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 14:40:25 compute-0 nova_compute[185474]: 2026-01-05 14:40:25.430 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:25 compute-0 nova_compute[185474]: 2026-01-05 14:40:25.430 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 14:40:25 compute-0 nova_compute[185474]: 2026-01-05 14:40:25.453 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:25 compute-0 sudo[228833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szhehhnicadgzevfbuotqlqjbcfqahmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624025.4838696-796-211039548696962/AnsiballZ_podman_container_info.py'
Jan 05 14:40:26 compute-0 sudo[228833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:26 compute-0 python3.9[228835]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 05 14:40:26 compute-0 sudo[228833]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:27 compute-0 sudo[228997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmysbwgtizxrrqsqnxxravhktyajbpsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624026.672019-804-213316692226171/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:27 compute-0 sudo[228997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:27 compute-0 python3.9[228999]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:27 compute-0 nova_compute[185474]: 2026-01-05 14:40:27.468 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:27 compute-0 systemd[1]: Started libpod-conmon-07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b.scope.
Jan 05 14:40:27 compute-0 podman[229000]: 2026-01-05 14:40:27.659426136 +0000 UTC m=+0.173699855 container exec 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:40:27 compute-0 podman[229000]: 2026-01-05 14:40:27.69344846 +0000 UTC m=+0.207722159 container exec_died 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:40:27 compute-0 sudo[228997]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:27 compute-0 systemd[1]: libpod-conmon-07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b.scope: Deactivated successfully.
Jan 05 14:40:28 compute-0 nova_compute[185474]: 2026-01-05 14:40:28.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:28 compute-0 nova_compute[185474]: 2026-01-05 14:40:28.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:28 compute-0 sudo[229178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spobgndxhchjnsatffpaegfkxaaqhwje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624028.0664806-812-207161003915241/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:28 compute-0 sudo[229178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:28 compute-0 python3.9[229180]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:28 compute-0 systemd[1]: Started libpod-conmon-07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b.scope.
Jan 05 14:40:29 compute-0 podman[229181]: 2026-01-05 14:40:29.012894116 +0000 UTC m=+0.143734723 container exec 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:40:29 compute-0 podman[229181]: 2026-01-05 14:40:29.046938959 +0000 UTC m=+0.177779516 container exec_died 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:40:29 compute-0 systemd[1]: libpod-conmon-07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b.scope: Deactivated successfully.
Jan 05 14:40:29 compute-0 sudo[229178]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:29 compute-0 nova_compute[185474]: 2026-01-05 14:40:29.394 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:29 compute-0 nova_compute[185474]: 2026-01-05 14:40:29.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:29 compute-0 nova_compute[185474]: 2026-01-05 14:40:29.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:29 compute-0 podman[201880]: time="2026-01-05T14:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:40:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27277 "" "Go-http-client/1.1"
Jan 05 14:40:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3851 "" "Go-http-client/1.1"
Jan 05 14:40:29 compute-0 sudo[229360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caokvkjmonwumzfpclbiyhqtxzujuqsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624029.4109867-820-100906168760332/AnsiballZ_file.py'
Jan 05 14:40:30 compute-0 sudo[229360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:30 compute-0 podman[229362]: 2026-01-05 14:40:30.169548063 +0000 UTC m=+0.151121401 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:40:30 compute-0 python3.9[229363]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:30 compute-0 sudo[229360]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:30 compute-0 nova_compute[185474]: 2026-01-05 14:40:30.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:30 compute-0 nova_compute[185474]: 2026-01-05 14:40:30.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:40:31 compute-0 sudo[229536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocalkuymioidiqgjbguggxysmuboqdgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624030.5991292-829-279876346578472/AnsiballZ_podman_container_info.py'
Jan 05 14:40:31 compute-0 sudo[229536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:31 compute-0 python3.9[229538]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:40:31 compute-0 openstack_network_exporter[205179]: ERROR   14:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:40:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:40:31 compute-0 openstack_network_exporter[205179]: ERROR   14:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:40:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.426 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.426 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.460 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.460 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.460 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.460 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:40:31 compute-0 sudo[229536]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:31 compute-0 podman[229551]: 2026-01-05 14:40:31.651057206 +0000 UTC m=+0.124781576 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 14:40:31 compute-0 systemd[1]: 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec-6c3fa6e17692aa66.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 14:40:31 compute-0 systemd[1]: 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec-6c3fa6e17692aa66.service: Failed with result 'exit-code'.
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.964 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.968 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5675MB free_disk=72.47956085205078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.968 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:40:31 compute-0 nova_compute[185474]: 2026-01-05 14:40:31.969 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.158 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.159 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.290 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing inventories for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.433 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating ProviderTree inventory for provider 81b80649-e249-4f86-9377-abfcf7fc43dd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 14:40:32 compute-0 sudo[229719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utjdjnolpcpgygjwludqqdihynycktow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624031.8877304-837-46440092362427/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.433 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 14:40:32 compute-0 sudo[229719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.458 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing aggregate associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.488 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing trait associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, traits: HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.525 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.545 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.548 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:40:32 compute-0 nova_compute[185474]: 2026-01-05 14:40:32.548 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:40:32 compute-0 python3.9[229721]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:32 compute-0 systemd[1]: Started libpod-conmon-41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f.scope.
Jan 05 14:40:32 compute-0 podman[229722]: 2026-01-05 14:40:32.861529315 +0000 UTC m=+0.161615236 container exec 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41)
Jan 05 14:40:32 compute-0 podman[229722]: 2026-01-05 14:40:32.896998108 +0000 UTC m=+0.197083979 container exec_died 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 14:40:32 compute-0 sudo[229719]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:32 compute-0 systemd[1]: libpod-conmon-41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f.scope: Deactivated successfully.
Jan 05 14:40:33 compute-0 podman[229831]: 2026-01-05 14:40:33.647421022 +0000 UTC m=+0.124893651 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vendor=Red Hat, Inc., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, config_id=kepler, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Jan 05 14:40:33 compute-0 sudo[229919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwycjlciwsmrhqakisxhjkqrbekaooyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624033.2902057-845-107739984686667/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:33 compute-0 sudo[229919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:34 compute-0 python3.9[229921]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:34 compute-0 systemd[1]: Started libpod-conmon-41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f.scope.
Jan 05 14:40:34 compute-0 podman[229922]: 2026-01-05 14:40:34.294671246 +0000 UTC m=+0.164239798 container exec 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 05 14:40:34 compute-0 podman[229922]: 2026-01-05 14:40:34.330401036 +0000 UTC m=+0.199969588 container exec_died 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.7, config_id=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Jan 05 14:40:34 compute-0 systemd[1]: libpod-conmon-41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f.scope: Deactivated successfully.
Jan 05 14:40:34 compute-0 sudo[229919]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:35 compute-0 sudo[230100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehdgsqpkcpcxkanvznrtxzrrhkgsrmwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624034.7409153-853-254300114373791/AnsiballZ_file.py'
Jan 05 14:40:35 compute-0 sudo[230100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:35 compute-0 python3.9[230102]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:35 compute-0 sudo[230100]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:36 compute-0 sudo[230252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xppjdpqoziqsnnmybxronyvwxhlvsmek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624035.9161959-862-114790886028420/AnsiballZ_podman_container_info.py'
Jan 05 14:40:36 compute-0 sudo[230252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:36 compute-0 python3.9[230254]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Jan 05 14:40:36 compute-0 sudo[230252]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:37 compute-0 sudo[230417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anolyllaxqjjwyqxtyhdklwanumuytnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624037.1766107-870-118068401078687/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:37 compute-0 sudo[230417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.747 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.747 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.748 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.748 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.752 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb9190dd0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.754 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.760 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.760 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.761 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.762 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.762 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.762 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.763 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.763 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.764 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.764 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.765 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.765 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.765 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.769 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.769 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.769 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.769 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:40:37.769 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:40:37 compute-0 python3.9[230419]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:38 compute-0 systemd[1]: Started libpod-conmon-97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec.scope.
Jan 05 14:40:38 compute-0 podman[230421]: 2026-01-05 14:40:38.081140729 +0000 UTC m=+0.162572013 container exec 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:40:38 compute-0 podman[230421]: 2026-01-05 14:40:38.11507427 +0000 UTC m=+0.196505584 container exec_died 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:40:38 compute-0 systemd[1]: libpod-conmon-97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec.scope: Deactivated successfully.
Jan 05 14:40:38 compute-0 sudo[230417]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:38 compute-0 sudo[230599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwybjgkmdzgtogngbplrpurtmxlwimwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624038.4668198-878-50802931957527/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:38 compute-0 sudo[230599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:39 compute-0 python3.9[230601]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:39 compute-0 systemd[1]: Started libpod-conmon-97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec.scope.
Jan 05 14:40:39 compute-0 podman[230602]: 2026-01-05 14:40:39.391022315 +0000 UTC m=+0.144447971 container exec 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 05 14:40:39 compute-0 podman[230602]: 2026-01-05 14:40:39.427828203 +0000 UTC m=+0.181253819 container exec_died 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 14:40:39 compute-0 systemd[1]: libpod-conmon-97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec.scope: Deactivated successfully.
Jan 05 14:40:39 compute-0 sudo[230599]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:40 compute-0 sudo[230780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkyajwmpeetfgldlerrabzjzpxpdrpgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624039.7958543-886-271835661451753/AnsiballZ_file.py'
Jan 05 14:40:40 compute-0 sudo[230780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:40 compute-0 python3.9[230782]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:40 compute-0 sudo[230780]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:41 compute-0 sudo[230948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emkwhsaoowzkxjozqwtxltzvnogqtlzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624040.9531531-895-205933727584643/AnsiballZ_podman_container_info.py'
Jan 05 14:40:41 compute-0 sudo[230948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:41 compute-0 podman[230906]: 2026-01-05 14:40:41.560946159 +0000 UTC m=+0.139312361 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 14:40:41 compute-0 python3.9[230953]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Jan 05 14:40:41 compute-0 sudo[230948]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:42 compute-0 sudo[231116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebwpjoubstaibttqjhexuhfjvelxhxgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624042.2511313-903-26769153135518/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:42 compute-0 sudo[231116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:43 compute-0 python3.9[231118]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:43 compute-0 systemd[1]: Started libpod-conmon-8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510.scope.
Jan 05 14:40:43 compute-0 podman[231119]: 2026-01-05 14:40:43.265981707 +0000 UTC m=+0.157581657 container exec 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=)
Jan 05 14:40:43 compute-0 podman[231119]: 2026-01-05 14:40:43.303736492 +0000 UTC m=+0.195336452 container exec_died 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=kepler, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release-0.7.12=, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 14:40:43 compute-0 sudo[231116]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:43 compute-0 systemd[1]: libpod-conmon-8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510.scope: Deactivated successfully.
Jan 05 14:40:44 compute-0 sudo[231297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljpxeohwyhnjczhapoagjxotspdwdbvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624043.7192304-911-280229411720937/AnsiballZ_podman_container_exec.py'
Jan 05 14:40:44 compute-0 sudo[231297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:44 compute-0 python3.9[231299]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 14:40:44 compute-0 systemd[1]: Started libpod-conmon-8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510.scope.
Jan 05 14:40:44 compute-0 podman[231300]: 2026-01-05 14:40:44.711088782 +0000 UTC m=+0.149242121 container exec 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler)
Jan 05 14:40:44 compute-0 podman[231300]: 2026-01-05 14:40:44.74564673 +0000 UTC m=+0.183799989 container exec_died 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, config_id=kepler, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, container_name=kepler, vendor=Red Hat, Inc., name=ubi9, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Jan 05 14:40:44 compute-0 sudo[231297]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:40:44.790 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:40:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:40:44.794 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:40:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:40:44.795 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:40:44 compute-0 systemd[1]: libpod-conmon-8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510.scope: Deactivated successfully.
Jan 05 14:40:45 compute-0 sudo[231479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhtcxascttchsfitxhlyipjzsoklyea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624045.0871654-919-80348740672380/AnsiballZ_file.py'
Jan 05 14:40:45 compute-0 sudo[231479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:45 compute-0 python3.9[231481]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:45 compute-0 sudo[231479]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:46 compute-0 sudo[231631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcetfttdilghnyniginxalbdfdkmhogl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624046.263835-928-76852511055763/AnsiballZ_file.py'
Jan 05 14:40:46 compute-0 sudo[231631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:47 compute-0 python3.9[231633]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:47 compute-0 sudo[231631]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:47 compute-0 sudo[231783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzzhguhaniekqsungkndtlmiiypecxuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624047.3654556-936-8627698102547/AnsiballZ_stat.py'
Jan 05 14:40:47 compute-0 sudo[231783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:48 compute-0 python3.9[231785]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:40:48 compute-0 sudo[231783]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:48 compute-0 podman[231786]: 2026-01-05 14:40:48.318551016 +0000 UTC m=+0.124658893 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 05 14:40:48 compute-0 sudo[231925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwhfwandwtcvhsgakjlbctfoxbotxnte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624047.3654556-936-8627698102547/AnsiballZ_copy.py'
Jan 05 14:40:48 compute-0 sudo[231925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:48 compute-0 python3.9[231927]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767624047.3654556-936-8627698102547/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:49 compute-0 sudo[231925]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:49 compute-0 sudo[232091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbelqkfujslakpyffbbehrggwbamyjxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624049.392381-952-15126716337634/AnsiballZ_file.py'
Jan 05 14:40:49 compute-0 sudo[232091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:49 compute-0 podman[232052]: 2026-01-05 14:40:49.949528905 +0000 UTC m=+0.173979422 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:40:50 compute-0 python3.9[232099]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:50 compute-0 sudo[232091]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:50 compute-0 sudo[232256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcfqbtlhsbvqbmybefykepumknmvyxhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624050.3464272-960-85559769401773/AnsiballZ_stat.py'
Jan 05 14:40:50 compute-0 sudo[232256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:51 compute-0 python3.9[232258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:40:51 compute-0 sudo[232256]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:51 compute-0 sudo[232334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqlutexqrfcatcpanqrnmpmlorehjnya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624050.3464272-960-85559769401773/AnsiballZ_file.py'
Jan 05 14:40:51 compute-0 sudo[232334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:51 compute-0 python3.9[232336]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:51 compute-0 sudo[232334]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:52 compute-0 sudo[232486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xypdoymcdjrxpqrozwvbmznxocozjkri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624052.0193112-972-24926242913775/AnsiballZ_stat.py'
Jan 05 14:40:52 compute-0 sudo[232486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:52 compute-0 python3.9[232488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:40:52 compute-0 sudo[232486]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:53 compute-0 sudo[232579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqjunjtyalzivkjvakrhdomnijcbeucb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624052.0193112-972-24926242913775/AnsiballZ_file.py'
Jan 05 14:40:53 compute-0 sudo[232579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:53 compute-0 podman[232538]: 2026-01-05 14:40:53.088159148 +0000 UTC m=+0.127393868 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 14:40:53 compute-0 python3.9[232583]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.b135hst5 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:53 compute-0 sudo[232579]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:53 compute-0 podman[232608]: 2026-01-05 14:40:53.660824878 +0000 UTC m=+0.137985525 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:40:54 compute-0 sudo[232757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpvthwaaitvtjaciwhqrgwzrtojnqapd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624053.6297731-984-16876422244091/AnsiballZ_stat.py'
Jan 05 14:40:54 compute-0 sudo[232757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:54 compute-0 python3.9[232759]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:40:54 compute-0 sudo[232757]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:54 compute-0 sudo[232835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyvzsndbrcrvxcoivdrdrtswnypkoecn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624053.6297731-984-16876422244091/AnsiballZ_file.py'
Jan 05 14:40:54 compute-0 sudo[232835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:54 compute-0 python3.9[232837]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:54 compute-0 sudo[232835]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:55 compute-0 sudo[232987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmhrjjloqbjrkzvibdonixramoeykjnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624055.2254875-997-112500438138557/AnsiballZ_command.py'
Jan 05 14:40:55 compute-0 sudo[232987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:55 compute-0 python3.9[232989]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:40:55 compute-0 sudo[232987]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:56 compute-0 sudo[233140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqymyirflpyhmxmtauzijruuidiburef ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767624056.2387223-1005-82932236745758/AnsiballZ_edpm_nftables_from_files.py'
Jan 05 14:40:56 compute-0 sudo[233140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:57 compute-0 python3[233142]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 05 14:40:57 compute-0 sudo[233140]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:58 compute-0 sudo[233292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygwjpitftofsfdrcqddrztuswydamize ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624057.511557-1013-246658453826733/AnsiballZ_stat.py'
Jan 05 14:40:58 compute-0 sudo[233292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:58 compute-0 python3.9[233294]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:40:58 compute-0 sudo[233292]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:58 compute-0 sudo[233370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcfblguzmaiyjzhrpglbkmgxtfrzzbtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624057.511557-1013-246658453826733/AnsiballZ_file.py'
Jan 05 14:40:58 compute-0 sudo[233370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:58 compute-0 python3.9[233372]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:40:58 compute-0 sudo[233370]: pam_unix(sudo:session): session closed for user root
Jan 05 14:40:59 compute-0 podman[201880]: time="2026-01-05T14:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:40:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27276 "" "Go-http-client/1.1"
Jan 05 14:40:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3854 "" "Go-http-client/1.1"
Jan 05 14:40:59 compute-0 sudo[233522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbqiiaqknacmuahxnpwhyiwwsgyponev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624059.2545033-1025-132924133565198/AnsiballZ_stat.py'
Jan 05 14:40:59 compute-0 sudo[233522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:40:59 compute-0 python3.9[233524]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:41:00 compute-0 sudo[233522]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:00 compute-0 sudo[233611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjxhvcnnzflbzhqxdyjbjqklkhdtxpng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624059.2545033-1025-132924133565198/AnsiballZ_file.py'
Jan 05 14:41:00 compute-0 sudo[233611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:00 compute-0 podman[233574]: 2026-01-05 14:41:00.454862629 +0000 UTC m=+0.101857899 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:41:00 compute-0 python3.9[233624]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:41:00 compute-0 sudo[233611]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:01 compute-0 openstack_network_exporter[205179]: ERROR   14:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:41:01 compute-0 openstack_network_exporter[205179]: ERROR   14:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:41:01 compute-0 sudo[233774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcrhteedzgvwqbmlgiihjahpfqujxpdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624060.9852421-1037-90606605934378/AnsiballZ_stat.py'
Jan 05 14:41:01 compute-0 sudo[233774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:01 compute-0 python3.9[233776]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:41:01 compute-0 sudo[233774]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:02 compute-0 sudo[233870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjmhbgqinyryllnxckftrujthetiqxmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624060.9852421-1037-90606605934378/AnsiballZ_file.py'
Jan 05 14:41:02 compute-0 sudo[233870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:02 compute-0 podman[233826]: 2026-01-05 14:41:02.346816475 +0000 UTC m=+0.150815379 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 14:41:02 compute-0 python3.9[233875]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:41:02 compute-0 sudo[233870]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:03 compute-0 sudo[234025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxasltddupguriryekxrhrjewamkssxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624062.9260507-1049-127171263416954/AnsiballZ_stat.py'
Jan 05 14:41:03 compute-0 sudo[234025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:03 compute-0 python3.9[234027]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:41:03 compute-0 sudo[234025]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:04 compute-0 sudo[234115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isbhramvsycpjawgwnwtpdlbctjdekzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624062.9260507-1049-127171263416954/AnsiballZ_file.py'
Jan 05 14:41:04 compute-0 sudo[234115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:04 compute-0 podman[234077]: 2026-01-05 14:41:04.140568542 +0000 UTC m=+0.099067983 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, version=9.4, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vendor=Red Hat, Inc., config_id=kepler, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 05 14:41:04 compute-0 python3.9[234120]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:41:04 compute-0 sudo[234115]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:05 compute-0 sudo[234271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fparftofkqkwdmxmvfqwhzbhdvpvkxwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624064.613235-1061-210983124226468/AnsiballZ_stat.py'
Jan 05 14:41:05 compute-0 sudo[234271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:05 compute-0 python3.9[234273]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:41:05 compute-0 sudo[234271]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:06 compute-0 sudo[234396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fimxakpicjmzxbkyxtmbfjpmnyixwdwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624064.613235-1061-210983124226468/AnsiballZ_copy.py'
Jan 05 14:41:06 compute-0 sudo[234396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:06 compute-0 python3.9[234398]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767624064.613235-1061-210983124226468/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:41:06 compute-0 sudo[234396]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:07 compute-0 sudo[234548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zakgnrrlprilxgyvxhqtriveoypkoipq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624066.7846563-1076-139223343711324/AnsiballZ_file.py'
Jan 05 14:41:07 compute-0 sudo[234548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:07 compute-0 python3.9[234550]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:41:07 compute-0 sudo[234548]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:08 compute-0 sudo[234700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xleyytojkvooqvyeuadpgildubkfmazy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624067.9124503-1084-173576090035736/AnsiballZ_command.py'
Jan 05 14:41:08 compute-0 sudo[234700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:08 compute-0 python3.9[234702]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:41:08 compute-0 sudo[234700]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:09 compute-0 sudo[234855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpylvkmtukqfxdjjgwfjqylvkqoftymu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624069.0429962-1092-133008853441185/AnsiballZ_blockinfile.py'
Jan 05 14:41:09 compute-0 sudo[234855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:09 compute-0 python3.9[234857]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:41:09 compute-0 sudo[234855]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:10 compute-0 sudo[235007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nicuhnqxwzcodbnkqmzhbebspvflpbni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624070.326609-1101-219154799992480/AnsiballZ_command.py'
Jan 05 14:41:10 compute-0 sudo[235007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:11 compute-0 python3.9[235009]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:41:11 compute-0 sudo[235007]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:11 compute-0 sudo[235178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvbphywxeymxaqyoowxbwoxwkhmqocap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624071.4228027-1109-6124603566579/AnsiballZ_stat.py'
Jan 05 14:41:11 compute-0 podman[235134]: 2026-01-05 14:41:11.926283463 +0000 UTC m=+0.111261561 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 05 14:41:11 compute-0 sudo[235178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:12 compute-0 python3.9[235181]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 14:41:12 compute-0 sudo[235178]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:13 compute-0 sudo[235333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgjkgipqfhguqszgszszoloesgchapww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624072.4259214-1117-273221122658103/AnsiballZ_command.py'
Jan 05 14:41:13 compute-0 sudo[235333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:13 compute-0 python3.9[235335]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:41:13 compute-0 sudo[235333]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:14 compute-0 sudo[235488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suhhfrkzrnztiialdqafevixjrjjaozn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624073.5614545-1125-150520891096603/AnsiballZ_file.py'
Jan 05 14:41:14 compute-0 sudo[235488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:14 compute-0 python3.9[235490]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:41:14 compute-0 sudo[235488]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:14 compute-0 sshd-session[214035]: Connection closed by 192.168.122.30 port 46298
Jan 05 14:41:14 compute-0 sshd-session[214032]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:41:14 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 05 14:41:14 compute-0 systemd[1]: session-27.scope: Consumed 1min 57.866s CPU time.
Jan 05 14:41:14 compute-0 systemd-logind[795]: Session 27 logged out. Waiting for processes to exit.
Jan 05 14:41:14 compute-0 systemd-logind[795]: Removed session 27.
Jan 05 14:41:18 compute-0 podman[235516]: 2026-01-05 14:41:18.655039853 +0000 UTC m=+0.132295250 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, version=9.6, vcs-type=git, config_id=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64)
Jan 05 14:41:20 compute-0 sshd-session[235537]: Accepted publickey for zuul from 192.168.122.30 port 36288 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 14:41:20 compute-0 systemd-logind[795]: New session 28 of user zuul.
Jan 05 14:41:20 compute-0 systemd[1]: Started Session 28 of User zuul.
Jan 05 14:41:20 compute-0 sshd-session[235537]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:41:20 compute-0 podman[235539]: 2026-01-05 14:41:20.69272581 +0000 UTC m=+0.168750504 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 05 14:41:22 compute-0 python3.9[235717]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:41:23 compute-0 podman[235834]: 2026-01-05 14:41:23.644182167 +0000 UTC m=+0.118093956 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 05 14:41:23 compute-0 sudo[235889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooarkvtyiswiiudhgvqksaczcaeoekaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624082.809318-34-264349645144430/AnsiballZ_systemd.py'
Jan 05 14:41:23 compute-0 sudo[235889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:23 compute-0 podman[235891]: 2026-01-05 14:41:23.841584943 +0000 UTC m=+0.094615004 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:41:24 compute-0 python3.9[235892]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Jan 05 14:41:24 compute-0 sudo[235889]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:25 compute-0 sudo[236066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toqsmtlxydreikpkwbwwdzokwdoysbio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624084.4680233-42-18805797463328/AnsiballZ_setup.py'
Jan 05 14:41:25 compute-0 sudo[236066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:25 compute-0 python3.9[236068]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 14:41:26 compute-0 sudo[236066]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:26 compute-0 sudo[236150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juymlipzrdonbbfnzgkectsozbagzvdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624084.4680233-42-18805797463328/AnsiballZ_dnf.py'
Jan 05 14:41:26 compute-0 sudo[236150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:26 compute-0 python3.9[236152]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 14:41:28 compute-0 nova_compute[185474]: 2026-01-05 14:41:28.545 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:41:29 compute-0 sudo[236150]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:29 compute-0 nova_compute[185474]: 2026-01-05 14:41:29.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:41:29 compute-0 nova_compute[185474]: 2026-01-05 14:41:29.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:41:29 compute-0 podman[201880]: time="2026-01-05T14:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:41:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 14:41:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3860 "" "Go-http-client/1.1"
Jan 05 14:41:30 compute-0 sudo[236308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riidsytisexmhzdqawytazbrooaggkiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624089.57303-54-248583923977709/AnsiballZ_stat.py'
Jan 05 14:41:30 compute-0 sudo[236308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:30 compute-0 nova_compute[185474]: 2026-01-05 14:41:30.395 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:41:30 compute-0 nova_compute[185474]: 2026-01-05 14:41:30.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:41:30 compute-0 nova_compute[185474]: 2026-01-05 14:41:30.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:41:30 compute-0 python3.9[236310]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:41:30 compute-0 sudo[236308]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:30 compute-0 podman[236311]: 2026-01-05 14:41:30.657448893 +0000 UTC m=+0.132400894 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:41:31 compute-0 sudo[236455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocazzptgrcopnwlbdedzhehfnsxonalv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624089.57303-54-248583923977709/AnsiballZ_copy.py'
Jan 05 14:41:31 compute-0 sudo[236455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:31 compute-0 nova_compute[185474]: 2026-01-05 14:41:31.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:41:31 compute-0 openstack_network_exporter[205179]: ERROR   14:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:41:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:41:31 compute-0 openstack_network_exporter[205179]: ERROR   14:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:41:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:41:31 compute-0 python3.9[236457]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767624089.57303-54-248583923977709/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:41:31 compute-0 sudo[236455]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.428 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.429 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.430 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.431 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:41:32 compute-0 sudo[236623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxwcxwcbmhkrmssssccndowswbdrnaxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624091.8774467-69-129777234077805/AnsiballZ_file.py'
Jan 05 14:41:32 compute-0 sudo[236623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:32 compute-0 podman[236581]: 2026-01-05 14:41:32.606964891 +0000 UTC m=+0.137846950 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:41:32 compute-0 python3.9[236628]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:41:32 compute-0 sudo[236623]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.930 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.933 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5712MB free_disk=72.47613143920898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.934 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:41:32 compute-0 nova_compute[185474]: 2026-01-05 14:41:32.935 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:41:33 compute-0 nova_compute[185474]: 2026-01-05 14:41:33.024 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:41:33 compute-0 nova_compute[185474]: 2026-01-05 14:41:33.025 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:41:33 compute-0 nova_compute[185474]: 2026-01-05 14:41:33.056 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:41:33 compute-0 nova_compute[185474]: 2026-01-05 14:41:33.072 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:41:33 compute-0 nova_compute[185474]: 2026-01-05 14:41:33.075 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:41:33 compute-0 nova_compute[185474]: 2026-01-05 14:41:33.076 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:41:33 compute-0 sudo[236778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shldidsioihghgfbzqbipsbgrwhhammr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624093.155347-77-175171290083664/AnsiballZ_stat.py'
Jan 05 14:41:33 compute-0 sudo[236778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:33 compute-0 python3.9[236780]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 14:41:33 compute-0 sudo[236778]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:34 compute-0 nova_compute[185474]: 2026-01-05 14:41:34.078 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:41:34 compute-0 nova_compute[185474]: 2026-01-05 14:41:34.079 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:41:34 compute-0 nova_compute[185474]: 2026-01-05 14:41:34.079 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:41:34 compute-0 nova_compute[185474]: 2026-01-05 14:41:34.100 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 14:41:34 compute-0 sudo[236917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hojmjcrkkwvcgtoowgqltmirxkresobx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624093.155347-77-175171290083664/AnsiballZ_copy.py'
Jan 05 14:41:34 compute-0 sudo[236917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:34 compute-0 podman[236875]: 2026-01-05 14:41:34.632899851 +0000 UTC m=+0.116061132 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, com.redhat.component=ubi9-container, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Jan 05 14:41:34 compute-0 python3.9[236923]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767624093.155347-77-175171290083664/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 14:41:34 compute-0 sudo[236917]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:35 compute-0 sudo[237073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pszjkduwsbsxmldllwwkobzobhwobmfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767624095.1576748-92-280894598028098/AnsiballZ_systemd.py'
Jan 05 14:41:35 compute-0 sudo[237073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:41:36 compute-0 python3.9[237075]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 14:41:36 compute-0 systemd[1]: Stopping System Logging Service...
Jan 05 14:41:36 compute-0 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] exiting on signal 15.
Jan 05 14:41:36 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Jan 05 14:41:36 compute-0 systemd[1]: Stopped System Logging Service.
Jan 05 14:41:36 compute-0 systemd[1]: rsyslog.service: Consumed 5.056s CPU time, 8.1M memory peak, read 0B from disk, written 6.4M to disk.
Jan 05 14:41:36 compute-0 systemd[1]: Starting System Logging Service...
Jan 05 14:41:36 compute-0 rsyslogd[237079]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="237079" x-info="https://www.rsyslog.com"] start
Jan 05 14:41:36 compute-0 systemd[1]: Started System Logging Service.
Jan 05 14:41:36 compute-0 rsyslogd[237079]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 14:41:36 compute-0 rsyslogd[237079]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Jan 05 14:41:36 compute-0 rsyslogd[237079]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Jan 05 14:41:36 compute-0 rsyslogd[237079]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Jan 05 14:41:36 compute-0 rsyslogd[237079]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Jan 05 14:41:36 compute-0 sudo[237073]: pam_unix(sudo:session): session closed for user root
Jan 05 14:41:37 compute-0 sshd-session[235562]: Connection closed by 192.168.122.30 port 36288
Jan 05 14:41:37 compute-0 sshd-session[235537]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:41:37 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 05 14:41:37 compute-0 systemd[1]: session-28.scope: Consumed 12.870s CPU time.
Jan 05 14:41:37 compute-0 systemd-logind[795]: Session 28 logged out. Waiting for processes to exit.
Jan 05 14:41:37 compute-0 systemd-logind[795]: Removed session 28.
Jan 05 14:41:39 compute-0 sshd-session[237108]: Invalid user solv from 165.22.168.95 port 42682
Jan 05 14:41:40 compute-0 sshd-session[237108]: Connection closed by invalid user solv 165.22.168.95 port 42682 [preauth]
Jan 05 14:41:42 compute-0 podman[237110]: 2026-01-05 14:41:42.646866589 +0000 UTC m=+0.126425321 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 05 14:41:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:41:44.792 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:41:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:41:44.794 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:41:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:41:44.795 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:41:49 compute-0 podman[237130]: 2026-01-05 14:41:49.668990683 +0000 UTC m=+0.150276485 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Jan 05 14:41:51 compute-0 podman[237150]: 2026-01-05 14:41:51.722352303 +0000 UTC m=+0.195057084 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 14:41:54 compute-0 podman[237176]: 2026-01-05 14:41:54.674279691 +0000 UTC m=+0.137335425 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 05 14:41:54 compute-0 podman[237175]: 2026-01-05 14:41:54.69538861 +0000 UTC m=+0.168093505 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:41:59 compute-0 podman[201880]: time="2026-01-05T14:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:41:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 14:41:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3854 "" "Go-http-client/1.1"
Jan 05 14:42:01 compute-0 openstack_network_exporter[205179]: ERROR   14:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:42:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:42:01 compute-0 openstack_network_exporter[205179]: ERROR   14:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:42:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:42:01 compute-0 podman[237218]: 2026-01-05 14:42:01.636453673 +0000 UTC m=+0.111746311 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:42:02 compute-0 sshd-session[237241]: Accepted publickey for zuul from 38.102.83.65 port 33736 ssh2: RSA SHA256:J8z/B181hdplgLZFhp0hXyUBZUpMLnoe/Gt2JPtUKmM
Jan 05 14:42:02 compute-0 systemd-logind[795]: New session 29 of user zuul.
Jan 05 14:42:02 compute-0 systemd[1]: Started Session 29 of User zuul.
Jan 05 14:42:02 compute-0 sshd-session[237241]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:42:02 compute-0 podman[237243]: 2026-01-05 14:42:02.926835763 +0000 UTC m=+0.137088653 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:42:04 compute-0 python3[237436]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:42:05 compute-0 podman[237490]: 2026-01-05 14:42:05.159972901 +0000 UTC m=+0.108599675 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public)
Jan 05 14:42:06 compute-0 sudo[237678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khffitxoupuggelomxnanpmgussfgvbv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767624125.7884202-36975-209640781102616/AnsiballZ_command.py'
Jan 05 14:42:06 compute-0 sudo[237678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:42:06 compute-0 python3[237680]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:42:06 compute-0 sudo[237678]: pam_unix(sudo:session): session closed for user root
Jan 05 14:42:07 compute-0 sudo[237831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppyspptqqzhuvcjxhgvertwnpomyssee ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767624127.1222317-36986-102169619287712/AnsiballZ_command.py'
Jan 05 14:42:07 compute-0 sudo[237831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:42:07 compute-0 python3[237833]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:42:09 compute-0 sudo[237831]: pam_unix(sudo:session): session closed for user root
Jan 05 14:42:10 compute-0 python3[237984]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 05 14:42:11 compute-0 sudo[238135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwbbviwqcogzzexbwjslzjlqcolellir ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767624131.2766728-37030-73186798898370/AnsiballZ_setup.py'
Jan 05 14:42:11 compute-0 sudo[238135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:42:12 compute-0 python3[238137]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 14:42:13 compute-0 sudo[238135]: pam_unix(sudo:session): session closed for user root
Jan 05 14:42:13 compute-0 podman[238230]: 2026-01-05 14:42:13.641480722 +0000 UTC m=+0.122546009 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 14:42:14 compute-0 sudo[238379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdgnswhsjttxvaaqahuubdhgduqmjuxq ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767624134.120661-37059-143106112266486/AnsiballZ_command.py'
Jan 05 14:42:14 compute-0 sudo[238379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:42:14 compute-0 python3[238381]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:42:15 compute-0 sudo[238379]: pam_unix(sudo:session): session closed for user root
Jan 05 14:42:16 compute-0 sudo[238543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nznvapbouvtbpvocwocckqwmlcankcdp ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767624135.5390716-37076-121913849982591/AnsiballZ_command.py'
Jan 05 14:42:16 compute-0 sudo[238543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:42:16 compute-0 python3[238545]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:42:16 compute-0 sudo[238543]: pam_unix(sudo:session): session closed for user root
Jan 05 14:42:19 compute-0 sshd-session[238585]: Received disconnect from 193.46.255.103 port 62392:11:  [preauth]
Jan 05 14:42:19 compute-0 sshd-session[238585]: Disconnected from authenticating user root 193.46.255.103 port 62392 [preauth]
Jan 05 14:42:20 compute-0 podman[238588]: 2026-01-05 14:42:20.619454136 +0000 UTC m=+0.100929711 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 14:42:22 compute-0 podman[238609]: 2026-01-05 14:42:22.66433818 +0000 UTC m=+0.154844950 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:42:25 compute-0 podman[238635]: 2026-01-05 14:42:25.843783507 +0000 UTC m=+0.096893203 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:42:25 compute-0 podman[238636]: 2026-01-05 14:42:25.864994875 +0000 UTC m=+0.118060419 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 14:42:29 compute-0 nova_compute[185474]: 2026-01-05 14:42:29.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:42:29 compute-0 podman[201880]: time="2026-01-05T14:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:42:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 14:42:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3860 "" "Go-http-client/1.1"
Jan 05 14:42:30 compute-0 nova_compute[185474]: 2026-01-05 14:42:30.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:42:30 compute-0 nova_compute[185474]: 2026-01-05 14:42:30.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:42:31 compute-0 nova_compute[185474]: 2026-01-05 14:42:31.395 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:42:31 compute-0 nova_compute[185474]: 2026-01-05 14:42:31.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:42:31 compute-0 openstack_network_exporter[205179]: ERROR   14:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:42:31 compute-0 openstack_network_exporter[205179]: ERROR   14:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:42:32 compute-0 nova_compute[185474]: 2026-01-05 14:42:32.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:42:32 compute-0 nova_compute[185474]: 2026-01-05 14:42:32.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:42:32 compute-0 nova_compute[185474]: 2026-01-05 14:42:32.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:42:32 compute-0 nova_compute[185474]: 2026-01-05 14:42:32.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:42:32 compute-0 nova_compute[185474]: 2026-01-05 14:42:32.432 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:42:32 compute-0 nova_compute[185474]: 2026-01-05 14:42:32.433 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:42:32 compute-0 nova_compute[185474]: 2026-01-05 14:42:32.433 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:42:32 compute-0 nova_compute[185474]: 2026-01-05 14:42:32.434 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:42:32 compute-0 podman[238677]: 2026-01-05 14:42:32.686980776 +0000 UTC m=+0.165999176 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:42:33 compute-0 nova_compute[185474]: 2026-01-05 14:42:33.012 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:42:33 compute-0 nova_compute[185474]: 2026-01-05 14:42:33.014 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5714MB free_disk=72.4800910949707GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:42:33 compute-0 nova_compute[185474]: 2026-01-05 14:42:33.015 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:42:33 compute-0 nova_compute[185474]: 2026-01-05 14:42:33.016 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:42:33 compute-0 nova_compute[185474]: 2026-01-05 14:42:33.083 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:42:33 compute-0 nova_compute[185474]: 2026-01-05 14:42:33.084 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:42:33 compute-0 nova_compute[185474]: 2026-01-05 14:42:33.107 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:42:33 compute-0 nova_compute[185474]: 2026-01-05 14:42:33.121 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:42:33 compute-0 nova_compute[185474]: 2026-01-05 14:42:33.122 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:42:33 compute-0 nova_compute[185474]: 2026-01-05 14:42:33.123 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:42:33 compute-0 podman[238701]: 2026-01-05 14:42:33.638635561 +0000 UTC m=+0.120658465 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 05 14:42:35 compute-0 podman[238720]: 2026-01-05 14:42:35.678374616 +0000 UTC m=+0.151982027 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, config_id=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9)
Jan 05 14:42:36 compute-0 nova_compute[185474]: 2026-01-05 14:42:36.124 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:42:36 compute-0 nova_compute[185474]: 2026-01-05 14:42:36.125 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:42:36 compute-0 nova_compute[185474]: 2026-01-05 14:42:36.125 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:42:36 compute-0 nova_compute[185474]: 2026-01-05 14:42:36.147 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.747 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.748 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.748 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.749 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.752 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.754 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.754 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.754 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.754 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.754 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.763 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:42:37.763 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:42:44 compute-0 podman[238743]: 2026-01-05 14:42:44.680481939 +0000 UTC m=+0.151774201 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 05 14:42:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:42:44.793 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:42:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:42:44.795 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:42:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:42:44.795 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:42:51 compute-0 podman[238763]: 2026-01-05 14:42:51.636897126 +0000 UTC m=+0.114195697 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 14:42:53 compute-0 podman[238784]: 2026-01-05 14:42:53.643988859 +0000 UTC m=+0.130308086 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 05 14:42:56 compute-0 podman[238808]: 2026-01-05 14:42:56.622946777 +0000 UTC m=+0.106811842 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:42:56 compute-0 podman[238809]: 2026-01-05 14:42:56.637043187 +0000 UTC m=+0.104134903 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 05 14:42:59 compute-0 podman[201880]: time="2026-01-05T14:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:42:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 14:42:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3864 "" "Go-http-client/1.1"
Jan 05 14:43:01 compute-0 openstack_network_exporter[205179]: ERROR   14:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:43:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:43:01 compute-0 openstack_network_exporter[205179]: ERROR   14:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:43:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:43:03 compute-0 podman[238850]: 2026-01-05 14:43:03.621854981 +0000 UTC m=+0.100344194 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:43:04 compute-0 podman[238871]: 2026-01-05 14:43:04.675075935 +0000 UTC m=+0.151314658 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 14:43:06 compute-0 podman[238892]: 2026-01-05 14:43:06.593837874 +0000 UTC m=+0.086075027 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, vcs-type=git, distribution-scope=public, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=kepler, release=1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 14:43:14 compute-0 podman[238913]: 2026-01-05 14:43:14.861595406 +0000 UTC m=+0.111121768 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 14:43:16 compute-0 sshd-session[237253]: Received disconnect from 38.102.83.65 port 33736:11: disconnected by user
Jan 05 14:43:16 compute-0 sshd-session[237253]: Disconnected from user zuul 38.102.83.65 port 33736
Jan 05 14:43:16 compute-0 sshd-session[237241]: pam_unix(sshd:session): session closed for user zuul
Jan 05 14:43:16 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 05 14:43:16 compute-0 systemd[1]: session-29.scope: Consumed 11.456s CPU time.
Jan 05 14:43:16 compute-0 systemd-logind[795]: Session 29 logged out. Waiting for processes to exit.
Jan 05 14:43:16 compute-0 systemd-logind[795]: Removed session 29.
Jan 05 14:43:22 compute-0 podman[238936]: 2026-01-05 14:43:22.685477367 +0000 UTC m=+0.165415573 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 14:43:24 compute-0 podman[238957]: 2026-01-05 14:43:24.696933821 +0000 UTC m=+0.174672918 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 05 14:43:27 compute-0 podman[238984]: 2026-01-05 14:43:27.618278045 +0000 UTC m=+0.096209035 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:43:27 compute-0 podman[238983]: 2026-01-05 14:43:27.6430685 +0000 UTC m=+0.115001571 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 14:43:29 compute-0 podman[201880]: time="2026-01-05T14:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:43:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 14:43:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3862 "" "Go-http-client/1.1"
Jan 05 14:43:30 compute-0 nova_compute[185474]: 2026-01-05 14:43:30.417 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:43:31 compute-0 nova_compute[185474]: 2026-01-05 14:43:31.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:43:31 compute-0 nova_compute[185474]: 2026-01-05 14:43:31.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:43:31 compute-0 nova_compute[185474]: 2026-01-05 14:43:31.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:43:31 compute-0 openstack_network_exporter[205179]: ERROR   14:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:43:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:43:31 compute-0 openstack_network_exporter[205179]: ERROR   14:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:43:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:43:32 compute-0 nova_compute[185474]: 2026-01-05 14:43:32.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:43:32 compute-0 nova_compute[185474]: 2026-01-05 14:43:32.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:43:32 compute-0 nova_compute[185474]: 2026-01-05 14:43:32.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.447 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.448 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.448 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.449 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.869 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.871 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5709MB free_disk=72.47988891601562GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.871 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.871 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.991 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:43:33 compute-0 nova_compute[185474]: 2026-01-05 14:43:33.992 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:43:34 compute-0 nova_compute[185474]: 2026-01-05 14:43:34.021 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:43:34 compute-0 nova_compute[185474]: 2026-01-05 14:43:34.038 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:43:34 compute-0 nova_compute[185474]: 2026-01-05 14:43:34.041 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:43:34 compute-0 nova_compute[185474]: 2026-01-05 14:43:34.041 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:43:34 compute-0 podman[239024]: 2026-01-05 14:43:34.628764547 +0000 UTC m=+0.105864499 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:43:35 compute-0 nova_compute[185474]: 2026-01-05 14:43:35.042 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:43:35 compute-0 podman[239046]: 2026-01-05 14:43:35.629772595 +0000 UTC m=+0.113149132 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi)
Jan 05 14:43:36 compute-0 nova_compute[185474]: 2026-01-05 14:43:36.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:43:36 compute-0 nova_compute[185474]: 2026-01-05 14:43:36.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:43:36 compute-0 nova_compute[185474]: 2026-01-05 14:43:36.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:43:36 compute-0 nova_compute[185474]: 2026-01-05 14:43:36.593 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 14:43:37 compute-0 podman[239065]: 2026-01-05 14:43:37.650995448 +0000 UTC m=+0.133125260 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, config_id=kepler, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, name=ubi9, container_name=kepler, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 05 14:43:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:43:44.795 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:43:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:43:44.796 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:43:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:43:44.796 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:43:45 compute-0 podman[239084]: 2026-01-05 14:43:45.639730818 +0000 UTC m=+0.115344740 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:43:53 compute-0 podman[239105]: 2026-01-05 14:43:53.660824442 +0000 UTC m=+0.137231558 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=)
Jan 05 14:43:55 compute-0 podman[239128]: 2026-01-05 14:43:55.677003121 +0000 UTC m=+0.153876548 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 05 14:43:58 compute-0 podman[239153]: 2026-01-05 14:43:58.630179717 +0000 UTC m=+0.098640479 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 14:43:58 compute-0 podman[239154]: 2026-01-05 14:43:58.633785612 +0000 UTC m=+0.095964958 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 14:43:59 compute-0 podman[201880]: time="2026-01-05T14:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:43:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 14:43:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3868 "" "Go-http-client/1.1"
Jan 05 14:44:01 compute-0 openstack_network_exporter[205179]: ERROR   14:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:44:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:44:01 compute-0 openstack_network_exporter[205179]: ERROR   14:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:44:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:44:05 compute-0 podman[239190]: 2026-01-05 14:44:05.620841145 +0000 UTC m=+0.098272618 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:44:06 compute-0 podman[239213]: 2026-01-05 14:44:06.680612196 +0000 UTC m=+0.160849253 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 14:44:08 compute-0 podman[239230]: 2026-01-05 14:44:08.646494685 +0000 UTC m=+0.136842347 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, build-date=2024-09-18T21:23:30, distribution-scope=public, config_id=kepler, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 14:44:16 compute-0 podman[239249]: 2026-01-05 14:44:16.673099374 +0000 UTC m=+0.150462410 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 05 14:44:22 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:44:22.664 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:44:22 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:44:22.666 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 14:44:22 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:44:22.668 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:44:24 compute-0 podman[239269]: 2026-01-05 14:44:24.636175614 +0000 UTC m=+0.114203602 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Jan 05 14:44:26 compute-0 podman[239290]: 2026-01-05 14:44:26.667454496 +0000 UTC m=+0.147777549 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 05 14:44:29 compute-0 podman[239316]: 2026-01-05 14:44:29.626594265 +0000 UTC m=+0.115306372 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:44:29 compute-0 podman[239317]: 2026-01-05 14:44:29.654463389 +0000 UTC m=+0.127738633 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 05 14:44:29 compute-0 podman[201880]: time="2026-01-05T14:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:44:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 14:44:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3863 "" "Go-http-client/1.1"
Jan 05 14:44:31 compute-0 nova_compute[185474]: 2026-01-05 14:44:31.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:44:31 compute-0 nova_compute[185474]: 2026-01-05 14:44:31.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:44:31 compute-0 openstack_network_exporter[205179]: ERROR   14:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:44:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:44:31 compute-0 openstack_network_exporter[205179]: ERROR   14:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:44:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:44:31 compute-0 sshd-session[239358]: Received disconnect from 193.46.255.244 port 17993:11:  [preauth]
Jan 05 14:44:31 compute-0 sshd-session[239358]: Disconnected from authenticating user root 193.46.255.244 port 17993 [preauth]
Jan 05 14:44:32 compute-0 nova_compute[185474]: 2026-01-05 14:44:32.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:44:32 compute-0 nova_compute[185474]: 2026-01-05 14:44:32.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:44:33 compute-0 nova_compute[185474]: 2026-01-05 14:44:33.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:44:33 compute-0 nova_compute[185474]: 2026-01-05 14:44:33.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:44:33 compute-0 nova_compute[185474]: 2026-01-05 14:44:33.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:44:33 compute-0 nova_compute[185474]: 2026-01-05 14:44:33.436 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:44:33 compute-0 nova_compute[185474]: 2026-01-05 14:44:33.436 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:44:33 compute-0 nova_compute[185474]: 2026-01-05 14:44:33.437 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:44:33 compute-0 nova_compute[185474]: 2026-01-05 14:44:33.437 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:44:34 compute-0 nova_compute[185474]: 2026-01-05 14:44:34.005 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:44:34 compute-0 nova_compute[185474]: 2026-01-05 14:44:34.007 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5716MB free_disk=72.47990798950195GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:44:34 compute-0 nova_compute[185474]: 2026-01-05 14:44:34.008 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:44:34 compute-0 nova_compute[185474]: 2026-01-05 14:44:34.008 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:44:34 compute-0 nova_compute[185474]: 2026-01-05 14:44:34.100 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:44:34 compute-0 nova_compute[185474]: 2026-01-05 14:44:34.101 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:44:34 compute-0 nova_compute[185474]: 2026-01-05 14:44:34.139 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:44:34 compute-0 nova_compute[185474]: 2026-01-05 14:44:34.157 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
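The inventory reported to placement above implies a schedulable capacity per resource class of (total - reserved) * allocation_ratio, which is the calculation placement applies when checking allocations. A quick arithmetic check against the figures in that log line:

    # Schedulable capacity implied by the inventory reported above:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 71.1 -- the headroom placement
    # sees for new instances on this node.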
Jan 05 14:44:34 compute-0 nova_compute[185474]: 2026-01-05 14:44:34.159 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:44:34 compute-0 nova_compute[185474]: 2026-01-05 14:44:34.160 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:44:35 compute-0 nova_compute[185474]: 2026-01-05 14:44:35.160 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:44:36 compute-0 nova_compute[185474]: 2026-01-05 14:44:36.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
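The "Running periodic task ComputeManager._poll_*" lines come from oslo_service's periodic task machinery, which invokes decorated manager methods on a timer. A minimal, self-contained sketch of that decorator pattern; the class name and the 60-second spacing are illustrative assumptions, not Nova's settings:

    # Minimal sketch of the oslo_service periodic-task pattern behind the
    # "Running periodic task ..." lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_something(self, context):
            # Once the spacing interval has elapsed, run_periodic_tasks()
            # logs "Running periodic task DemoManager._poll_something"
            # and then calls this method.
            pass

    mgr = DemoManager()
    mgr.run_periodic_tasks(context=None)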
Jan 05 14:44:36 compute-0 podman[239360]: 2026-01-05 14:44:36.65780381 +0000 UTC m=+0.134182765 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
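The podman record above is a periodic health_status event for the node_exporter container; its config_data shows the healthcheck command ('/openstack/healthcheck node_exporter') mounted from the host. A hedged sketch of triggering the same check by hand and reading back the stored health state via the podman CLI, wrapped in Python to keep the examples in one language (the container name is taken from the log; the rest is a generic invocation, not part of the EDPM tooling):

    # Hypothetical helper: run a container's configured healthcheck on
    # demand and read the resulting health state, mirroring what the
    # periodic health_status events above report.
    import json
    import subprocess

    def check(container: str) -> str:
        subprocess.run(["podman", "healthcheck", "run", container], check=False)
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", container],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)["Status"]  # e.g. "healthy"

    print(check("node_exporter"))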
Jan 05 14:44:37 compute-0 nova_compute[185474]: 2026-01-05 14:44:37.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:44:37 compute-0 nova_compute[185474]: 2026-01-05 14:44:37.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:44:37 compute-0 nova_compute[185474]: 2026-01-05 14:44:37.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:44:37 compute-0 nova_compute[185474]: 2026-01-05 14:44:37.420 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 14:44:37 compute-0 podman[239384]: 2026-01-05 14:44:37.646596344 +0000 UTC m=+0.126567402 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.748 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.749 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
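The two lines above show the polling manager noting that it has more pollsters than worker threads and then dispatching them to a single-threaded executor. A rough sketch of that dispatch shape using concurrent.futures; the pollster names and the placeholder poll function are assumptions, and the real manager also threads caches, history, and discovery results through each call:

    # Rough sketch of the dispatch pattern: N pollsters submitted to a small
    # ThreadPoolExecutor, so work queues up when pollsters outnumber threads.
    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["disk.device.write.latency", "cpu", "memory.usage"]  # assumed names

    def run_pollster(name):
        # Placeholder for discovery plus sample collection for one meter.
        return f"Finished processing pollster [{name}]."

    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(run_pollster, pollsters):
            print(result)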
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.749 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.753 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.755 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.757 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.758 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.759 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.760 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.763 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.764 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.764 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.768 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.768 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.768 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.769 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.769 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.770 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.771 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.771 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.772 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.772 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.773 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.773 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.774 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.776 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.777 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:44:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:44:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
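Every pollster in this cycle followed the same sequence visible above: run the local_instances discovery, get back an empty list (no instances exist on this host yet), log "Skip pollster ..., no resources found this cycle", and finally report the pollster as finished. A compact sketch of that control flow; the function names are illustrative, not Ceilometer's internals:

    # Illustrative control flow for one polling cycle on a host with no
    # instances: an empty discovery result means every pollster is skipped.
    def discover_local_instances():
        return []  # no VMs are running on this compute node yet

    def poll_cycle(pollsters):
        resources = discover_local_instances()
        for name in pollsters:
            if not resources:
                print(f"Skip pollster {name}, no resources found this cycle")
                continue
            # sample collection for this meter would run here
        # each pollster is then reported as finished, as in the log above

    poll_cycle(["disk.device.write.latency", "cpu", "memory.usage"])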
Jan 05 14:44:37 compute-0 sshd-session[239404]: Invalid user solv from 165.22.168.95 port 44056
Jan 05 14:44:38 compute-0 sshd-session[239404]: Connection closed by invalid user solv 165.22.168.95 port 44056 [preauth]
Jan 05 14:44:39 compute-0 podman[239407]: 2026-01-05 14:44:39.62450628 +0000 UTC m=+0.100830744 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, release-0.7.12=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, distribution-scope=public, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0)
Jan 05 14:44:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:44:44.797 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:44:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:44:44.797 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:44:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:44:44.797 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:44:47 compute-0 podman[239426]: 2026-01-05 14:44:47.657259523 +0000 UTC m=+0.132363187 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224)
Jan 05 14:44:55 compute-0 podman[239446]: 2026-01-05 14:44:55.644315243 +0000 UTC m=+0.123503079 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350)
Jan 05 14:44:57 compute-0 podman[239466]: 2026-01-05 14:44:57.674884786 +0000 UTC m=+0.156915782 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 14:44:59 compute-0 podman[201880]: time="2026-01-05T14:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:44:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 14:44:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3869 "" "Go-http-client/1.1"
Jan 05 14:45:00 compute-0 podman[239492]: 2026-01-05 14:45:00.655715443 +0000 UTC m=+0.127726694 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 05 14:45:00 compute-0 podman[239491]: 2026-01-05 14:45:00.658500717 +0000 UTC m=+0.141574133 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 14:45:01 compute-0 openstack_network_exporter[205179]: ERROR   14:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:45:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:45:01 compute-0 openstack_network_exporter[205179]: ERROR   14:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:45:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:45:07 compute-0 podman[239533]: 2026-01-05 14:45:07.640709444 +0000 UTC m=+0.104437861 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:45:08 compute-0 podman[239557]: 2026-01-05 14:45:08.650714814 +0000 UTC m=+0.122610946 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 14:45:09 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:09.464 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:45:09 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:09.465 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 14:45:10 compute-0 podman[239575]: 2026-01-05 14:45:10.653519736 +0000 UTC m=+0.131220117 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.expose-services=)
Jan 05 14:45:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:14.468 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:45:18 compute-0 podman[239593]: 2026-01-05 14:45:18.636432554 +0000 UTC m=+0.111599531 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.511 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "731f6e65-e951-4af3-aaf3-0322c02b154c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.511 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.530 185478 DEBUG nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.632 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.633 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.643 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.643 185478 INFO nova.compute.claims [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Claim successful on node compute-0.ctlplane.example.com
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.746 185478 DEBUG nova.compute.provider_tree [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.762 185478 DEBUG nova.scheduler.client.report [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.786 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.786 185478 DEBUG nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.841 185478 DEBUG nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.842 185478 DEBUG nova.network.neutron [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.869 185478 INFO nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 14:45:21 compute-0 nova_compute[185474]: 2026-01-05 14:45:21.907 185478 DEBUG nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 14:45:22 compute-0 nova_compute[185474]: 2026-01-05 14:45:22.008 185478 DEBUG nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 14:45:22 compute-0 nova_compute[185474]: 2026-01-05 14:45:22.010 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 14:45:22 compute-0 nova_compute[185474]: 2026-01-05 14:45:22.011 185478 INFO nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Creating image(s)
Jan 05 14:45:22 compute-0 nova_compute[185474]: 2026-01-05 14:45:22.012 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:22 compute-0 nova_compute[185474]: 2026-01-05 14:45:22.012 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:22 compute-0 nova_compute[185474]: 2026-01-05 14:45:22.013 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:22 compute-0 nova_compute[185474]: 2026-01-05 14:45:22.014 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:22 compute-0 nova_compute[185474]: 2026-01-05 14:45:22.016 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:22 compute-0 nova_compute[185474]: 2026-01-05 14:45:22.550 185478 WARNING oslo_policy.policy [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 05 14:45:22 compute-0 nova_compute[185474]: 2026-01-05 14:45:22.550 185478 WARNING oslo_policy.policy [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.260 185478 DEBUG nova.network.neutron [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Successfully created port: c6393a71-e622-49d1-97df-e208cd2c8f06 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.344 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.449 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4.part --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.451 185478 DEBUG nova.virt.images [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] 22e54d95-dd91-4f66-a65f-ce9984e648dc was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.453 185478 DEBUG nova.privsep.utils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.454 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4.part /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.694 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4.part /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4.converted" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.702 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.759 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4.converted --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.761 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:23 compute-0 nova_compute[185474]: 2026-01-05 14:45:23.788 185478 INFO oslo.privsep.daemon [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpzt3dj9h3/privsep.sock']
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.454 185478 INFO oslo.privsep.daemon [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Spawned new privsep daemon via rootwrap
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.360 239631 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.367 239631 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.371 239631 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.371 239631 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239631
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.568 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.626 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.628 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.630 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.653 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.733 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.735 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4,backing_fmt=raw /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.805 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4,backing_fmt=raw /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk 1073741824" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.807 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.808 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.890 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.893 185478 DEBUG nova.virt.disk.api [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Checking if we can resize image /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.894 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.995 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.997 185478 DEBUG nova.virt.disk.api [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Cannot resize image /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 14:45:24 compute-0 nova_compute[185474]: 2026-01-05 14:45:24.999 185478 DEBUG nova.objects.instance [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'migration_context' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:45:25 compute-0 nova_compute[185474]: 2026-01-05 14:45:25.883 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:25 compute-0 nova_compute[185474]: 2026-01-05 14:45:25.884 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:25 compute-0 nova_compute[185474]: 2026-01-05 14:45:25.885 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:25 compute-0 nova_compute[185474]: 2026-01-05 14:45:25.886 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:25 compute-0 nova_compute[185474]: 2026-01-05 14:45:25.888 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:25 compute-0 nova_compute[185474]: 2026-01-05 14:45:25.889 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:25 compute-0 nova_compute[185474]: 2026-01-05 14:45:25.932 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:25 compute-0 nova_compute[185474]: 2026-01-05 14:45:25.934 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:25 compute-0 nova_compute[185474]: 2026-01-05 14:45:25.993 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:25 compute-0 nova_compute[185474]: 2026-01-05 14:45:25.995 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.023 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.119 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.122 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.124 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.148 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.238 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.241 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.416 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 1073741824" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.417 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
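[Annotation] The guest's ephemeral disk is a qcow2 overlay whose backing file is the shared raw base image in _base; the overlay starts nearly empty and only stores blocks the guest writes. A rough equivalent of the logged "qemu-img create", with the paths and the 1 GiB size (flavor ephemeral_gb=1) taken from the log:

    # Hedged sketch: creating the qcow2 overlay on top of the raw base ephemeral image.
    import subprocess

    base = "/var/lib/nova/instances/_base/ephemeral_1_0706d66"
    overlay = "/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0"

    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         overlay, "1073741824"],   # 1 GiB virtual size
        check=True)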
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.418 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.477 185478 DEBUG nova.network.neutron [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Successfully updated port: c6393a71-e622-49d1-97df-e208cd2c8f06 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.489 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.491 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.491 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Ensure instance console log exists: /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.492 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.493 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.493 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
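[Annotation] The "Acquiring lock / acquired / released" triplets are emitted by oslo.concurrency's lockutils; nova serializes sections such as _allocate_mdevs under a named lock so concurrent builds on the same worker cannot race. A minimal sketch of the decorator form (lock name from the log; the function body is illustrative only):

    # Hedged sketch: the named-lock pattern behind the "vgpu_resources" messages above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs(requested):
        # Only one caller in this worker holds the lock at a time; the acquire and
        # release lines in the journal are logged by the decorator itself.
        return []   # nothing to allocate for a flavor without VGPU resources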
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.502 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.503 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.504 185478 DEBUG nova.network.neutron [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 14:45:26 compute-0 podman[239664]: 2026-01-05 14:45:26.623033242 +0000 UTC m=+0.109119876 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 05 14:45:26 compute-0 nova_compute[185474]: 2026-01-05 14:45:26.716 185478 DEBUG nova.network.neutron [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 14:45:27 compute-0 nova_compute[185474]: 2026-01-05 14:45:27.101 185478 DEBUG nova.compute.manager [req-7ccdfb85-bfd7-483e-8f47-dfaabbfbcf1d req-0efcca73-a3d8-4290-ac69-10de6ba3d1b8 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Received event network-changed-c6393a71-e622-49d1-97df-e208cd2c8f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:45:27 compute-0 nova_compute[185474]: 2026-01-05 14:45:27.102 185478 DEBUG nova.compute.manager [req-7ccdfb85-bfd7-483e-8f47-dfaabbfbcf1d req-0efcca73-a3d8-4290-ac69-10de6ba3d1b8 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Refreshing instance network info cache due to event network-changed-c6393a71-e622-49d1-97df-e208cd2c8f06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 14:45:27 compute-0 nova_compute[185474]: 2026-01-05 14:45:27.103 185478 DEBUG oslo_concurrency.lockutils [req-7ccdfb85-bfd7-483e-8f47-dfaabbfbcf1d req-0efcca73-a3d8-4290-ac69-10de6ba3d1b8 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:45:27 compute-0 nova_compute[185474]: 2026-01-05 14:45:27.963 185478 DEBUG nova.network.neutron [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:45:27 compute-0 nova_compute[185474]: 2026-01-05 14:45:27.999 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:27.999 185478 DEBUG nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Instance network_info: |[{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.000 185478 DEBUG oslo_concurrency.lockutils [req-7ccdfb85-bfd7-483e-8f47-dfaabbfbcf1d req-0efcca73-a3d8-4290-ac69-10de6ba3d1b8 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.000 185478 DEBUG nova.network.neutron [req-7ccdfb85-bfd7-483e-8f47-dfaabbfbcf1d req-0efcca73-a3d8-4290-ac69-10de6ba3d1b8 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Refreshing network info cache for port c6393a71-e622-49d1-97df-e208cd2c8f06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
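[Annotation] The network_info blob logged above is a plain list-of-dicts structure; the fields used next when building the guest XML (MAC, fixed IP, MTU, bridge, tap device) can be read straight out of it. A small sketch, assuming the blob has been captured as a Python object with one VIF and one IPv4 subnet:

    # Hedged sketch: extracting the values later reflected in the guest XML.
    def summarize_vif(network_info):
        vif = network_info[0]
        subnet = vif["network"]["subnets"][0]
        return {
            "mac": vif["address"],                    # fa:16:3e:f3:7f:70
            "fixed_ip": subnet["ips"][0]["address"],  # 192.168.0.178
            "mtu": vif["network"]["meta"]["mtu"],     # 1442
            "bridge": vif["details"]["bridge_name"],  # br-int
            "tap": vif["devname"],                    # tapc6393a71-e6
        }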
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.004 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Start _get_guest_xml network_info=[{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T14:44:12Z,direct_url=<?>,disk_format='qcow2',id=22e54d95-dd91-4f66-a65f-ce9984e648dc,min_disk=0,min_ram=0,name='cirros',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T14:44:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}], 'ephemerals': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.011 185478 WARNING nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.017 185478 DEBUG nova.virt.libvirt.host [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.018 185478 DEBUG nova.virt.libvirt.host [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.023 185478 DEBUG nova.virt.libvirt.host [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.023 185478 DEBUG nova.virt.libvirt.host [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.023 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.024 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T14:44:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='afe04c80-f0ab-417e-844c-b5b05cc96b17',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T14:44:12Z,direct_url=<?>,disk_format='qcow2',id=22e54d95-dd91-4f66-a65f-ce9984e648dc,min_disk=0,min_ram=0,name='cirros',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T14:44:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.024 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.024 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.024 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.025 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.025 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.025 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.025 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.025 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.026 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.026 185478 DEBUG nova.virt.hardware [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
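[Annotation] With no topology hints from flavor or image (all "0:0:0"), the only constraint is vcpus=1 against the default 65536 limits, so sockets=1, cores=1, threads=1 is the single valid layout. A simplified illustration of that enumeration; this is a sketch of the idea, not nova's exact hardware.py algorithm:

    # Hedged sketch: enumerating sockets*cores*threads factorizations of the vCPU count.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))   # [(1, 1, 1)] -> matches "Got 1 possible topologies"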
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.029 185478 DEBUG nova.privsep.utils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.030 185478 DEBUG nova.virt.libvirt.vif [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T14:45:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-6qqwyv3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T14:45:21Z,user_data=None,user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=731f6e65-e951-4af3-aaf3-0322c02b154c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.030 185478 DEBUG nova.network.os_vif_util [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.031 185478 DEBUG nova.network.os_vif_util [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:7f:70,bridge_name='br-int',has_traffic_filtering=True,id=c6393a71-e622-49d1-97df-e208cd2c8f06,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6393a71-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.032 185478 DEBUG nova.objects.instance [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'pci_devices' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.061 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] End _get_guest_xml xml=<domain type="kvm">
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <uuid>731f6e65-e951-4af3-aaf3-0322c02b154c</uuid>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <name>instance-00000001</name>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <memory>524288</memory>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <metadata>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <nova:name>test_0</nova:name>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 14:45:28</nova:creationTime>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <nova:flavor name="m1.small">
Jan 05 14:45:28 compute-0 nova_compute[185474]:         <nova:memory>512</nova:memory>
Jan 05 14:45:28 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 14:45:28 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 14:45:28 compute-0 nova_compute[185474]:         <nova:ephemeral>1</nova:ephemeral>
Jan 05 14:45:28 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 14:45:28 compute-0 nova_compute[185474]:         <nova:user uuid="4c0cf318026a40748762c9e05cd1efe0">admin</nova:user>
Jan 05 14:45:28 compute-0 nova_compute[185474]:         <nova:project uuid="54417029b2fb4b749e20754214013802">admin</nova:project>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="22e54d95-dd91-4f66-a65f-ce9984e648dc"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <nova:ports>
Jan 05 14:45:28 compute-0 nova_compute[185474]:         <nova:port uuid="c6393a71-e622-49d1-97df-e208cd2c8f06">
Jan 05 14:45:28 compute-0 nova_compute[185474]:           <nova:ip type="fixed" address="192.168.0.178" ipVersion="4"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:         </nova:port>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       </nova:ports>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   </metadata>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <system>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <entry name="serial">731f6e65-e951-4af3-aaf3-0322c02b154c</entry>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <entry name="uuid">731f6e65-e951-4af3-aaf3-0322c02b154c</entry>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     </system>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <os>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   </os>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <features>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <apic/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   </features>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   </clock>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   </cpu>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   <devices>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <target dev="vdb" bus="virtio"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.config"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <interface type="ethernet">
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <mac address="fa:16:3e:f3:7f:70"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <mtu size="1442"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <target dev="tapc6393a71-e6"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     </interface>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/console.log" append="off"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     </serial>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <video>
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     </video>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     </rng>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 14:45:28 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 14:45:28 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 14:45:28 compute-0 nova_compute[185474]:   </devices>
Jan 05 14:45:28 compute-0 nova_compute[185474]: </domain>
Jan 05 14:45:28 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
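[Annotation] The domain XML above is what nova hands to libvirt next. Defining and starting such a domain by hand looks roughly like this with the libvirt Python bindings; the connection URI and the lack of error handling are simplifications:

    # Hedged sketch: feeding the generated XML to libvirtd, as nova-compute does next.
    import libvirt

    def define_and_boot(xml):
        conn = libvirt.open("qemu:///system")   # system libvirtd used by nova-compute
        try:
            dom = conn.defineXML(xml)           # persist the domain definition
            dom.create()                        # power it on
            return dom.UUIDString()
        finally:
            conn.close()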
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.062 185478 DEBUG nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Preparing to wait for external event network-vif-plugged-c6393a71-e622-49d1-97df-e208cd2c8f06 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.062 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.063 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.063 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.064 185478 DEBUG nova.virt.libvirt.vif [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T14:45:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-6qqwyv3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T14:45:21Z,user_data=None,user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=731f6e65-e951-4af3-aaf3-0322c02b154c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.064 185478 DEBUG nova.network.os_vif_util [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.065 185478 DEBUG nova.network.os_vif_util [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:7f:70,bridge_name='br-int',has_traffic_filtering=True,id=c6393a71-e622-49d1-97df-e208cd2c8f06,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6393a71-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.066 185478 DEBUG os_vif [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:7f:70,bridge_name='br-int',has_traffic_filtering=True,id=c6393a71-e622-49d1-97df-e208cd2c8f06,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6393a71-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
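[Annotation] The "Plugging vif VIFOpenVSwitch(...)" line is the hand-off to the os-vif library: it is initialized once, then given the converted VIF object plus an InstanceInfo. A minimal sketch of that call, building a VIF with only the fields visible in the log (the real object carries more):

    # Hedged sketch: the os-vif call corresponding to the "Plugging vif" line above.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()   # load the ovs / linux_bridge / ... plugins
    my_vif = vif.VIFOpenVSwitch(
        id="c6393a71-e622-49d1-97df-e208cd2c8f06",
        address="fa:16:3e:f3:7f:70",
        vif_name="tapc6393a71-e6",
        bridge_name="br-int",
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="c6393a71-e622-49d1-97df-e208cd2c8f06"),
        network=network.Network(id="905a1599-2980-4b24-9705-76e3c8a469ea"))
    info = instance_info.InstanceInfo(
        uuid="731f6e65-e951-4af3-aaf3-0322c02b154c", name="instance-00000001")
    os_vif.plug(my_vif, info)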
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.116 185478 DEBUG ovsdbapp.backend.ovs_idl [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.117 185478 DEBUG ovsdbapp.backend.ovs_idl [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.117 185478 DEBUG ovsdbapp.backend.ovs_idl [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.118 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.119 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.120 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.122 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.125 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.129 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.148 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.149 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.150 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.151 185478 INFO oslo.privsep.daemon [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpv_ti661a/privsep.sock']
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.422 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:28 compute-0 podman[239688]: 2026-01-05 14:45:28.697746305 +0000 UTC m=+0.179057835 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.860 185478 INFO oslo.privsep.daemon [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Spawned new privsep daemon via rootwrap
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.724 239713 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.731 239713 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.735 239713 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 05 14:45:28 compute-0 nova_compute[185474]: 2026-01-05 14:45:28.736 239713 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239713
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.212 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.213 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6393a71-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.214 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6393a71-e6, col_values=(('external_ids', {'iface-id': 'c6393a71-e622-49d1-97df-e208cd2c8f06', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:7f:70', 'vm-uuid': '731f6e65-e951-4af3-aaf3-0322c02b154c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.217 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:29 compute-0 NetworkManager[56139]: <info>  [1767624329.2186] manager: (tapc6393a71-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.221 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.227 185478 DEBUG nova.network.neutron [req-7ccdfb85-bfd7-483e-8f47-dfaabbfbcf1d req-0efcca73-a3d8-4290-ac69-10de6ba3d1b8 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated VIF entry in instance network info cache for port c6393a71-e622-49d1-97df-e208cd2c8f06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.228 185478 DEBUG nova.network.neutron [req-7ccdfb85-bfd7-483e-8f47-dfaabbfbcf1d req-0efcca73-a3d8-4290-ac69-10de6ba3d1b8 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.233 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.234 185478 INFO os_vif [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:7f:70,bridge_name='br-int',has_traffic_filtering=True,id=c6393a71-e622-49d1-97df-e208cd2c8f06,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6393a71-e6')
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.246 185478 DEBUG oslo_concurrency.lockutils [req-7ccdfb85-bfd7-483e-8f47-dfaabbfbcf1d req-0efcca73-a3d8-4290-ac69-10de6ba3d1b8 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.303 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.304 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.304 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.304 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No VIF found with MAC fa:16:3e:f3:7f:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 14:45:29 compute-0 nova_compute[185474]: 2026-01-05 14:45:29.305 185478 INFO nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Using config drive
Jan 05 14:45:29 compute-0 podman[201880]: time="2026-01-05T14:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:45:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 14:45:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3879 "" "Go-http-client/1.1"
Jan 05 14:45:31 compute-0 nova_compute[185474]: 2026-01-05 14:45:31.270 185478 INFO nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Creating config drive at /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.config
Jan 05 14:45:31 compute-0 nova_compute[185474]: 2026-01-05 14:45:31.277 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8340hi49 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:31 compute-0 openstack_network_exporter[205179]: ERROR   14:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:45:31 compute-0 openstack_network_exporter[205179]: ERROR   14:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:45:31 compute-0 nova_compute[185474]: 2026-01-05 14:45:31.418 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:31 compute-0 nova_compute[185474]: 2026-01-05 14:45:31.424 185478 DEBUG oslo_concurrency.processutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8340hi49" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:31 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 05 14:45:31 compute-0 kernel: tapc6393a71-e6: entered promiscuous mode
Jan 05 14:45:31 compute-0 NetworkManager[56139]: <info>  [1767624331.5937] manager: (tapc6393a71-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Jan 05 14:45:31 compute-0 ovn_controller[97763]: 2026-01-05T14:45:31Z|00027|binding|INFO|Claiming lport c6393a71-e622-49d1-97df-e208cd2c8f06 for this chassis.
Jan 05 14:45:31 compute-0 ovn_controller[97763]: 2026-01-05T14:45:31Z|00028|binding|INFO|c6393a71-e622-49d1-97df-e208cd2c8f06: Claiming fa:16:3e:f3:7f:70 192.168.0.178
Jan 05 14:45:31 compute-0 nova_compute[185474]: 2026-01-05 14:45:31.599 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:31 compute-0 nova_compute[185474]: 2026-01-05 14:45:31.605 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:31 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:31.629 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:7f:70 192.168.0.178'], port_security=['fa:16:3e:f3:7f:70 192.168.0.178'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.178/24', 'neutron:device_id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-905a1599-2980-4b24-9705-76e3c8a469ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54417029b2fb4b749e20754214013802', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a131d1b-ed26-4729-8c09-f87c7299dcd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9f4be22-b417-4efb-ba81-f8a9c3c4527d, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=c6393a71-e622-49d1-97df-e208cd2c8f06) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:45:31 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:31.631 107222 INFO neutron.agent.ovn.metadata.agent [-] Port c6393a71-e622-49d1-97df-e208cd2c8f06 in datapath 905a1599-2980-4b24-9705-76e3c8a469ea bound to our chassis
Jan 05 14:45:31 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:31.633 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 905a1599-2980-4b24-9705-76e3c8a469ea
Jan 05 14:45:31 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:31.635 107222 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpmposspb3/privsep.sock']
Jan 05 14:45:31 compute-0 systemd-udevd[239778]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 14:45:31 compute-0 podman[239728]: 2026-01-05 14:45:31.646338912 +0000 UTC m=+0.136934839 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:45:31 compute-0 NetworkManager[56139]: <info>  [1767624331.6610] device (tapc6393a71-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 14:45:31 compute-0 NetworkManager[56139]: <info>  [1767624331.6654] device (tapc6393a71-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 14:45:31 compute-0 systemd-machined[156786]: New machine qemu-1-instance-00000001.
Jan 05 14:45:31 compute-0 podman[239729]: 2026-01-05 14:45:31.683512815 +0000 UTC m=+0.157563540 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 14:45:31 compute-0 nova_compute[185474]: 2026-01-05 14:45:31.701 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:31 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 05 14:45:31 compute-0 ovn_controller[97763]: 2026-01-05T14:45:31Z|00029|binding|INFO|Setting lport c6393a71-e622-49d1-97df-e208cd2c8f06 ovn-installed in OVS
Jan 05 14:45:31 compute-0 ovn_controller[97763]: 2026-01-05T14:45:31Z|00030|binding|INFO|Setting lport c6393a71-e622-49d1-97df-e208cd2c8f06 up in Southbound
Jan 05 14:45:31 compute-0 nova_compute[185474]: 2026-01-05 14:45:31.712 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.124 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624332.1233819, 731f6e65-e951-4af3-aaf3-0322c02b154c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.126 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] VM Started (Lifecycle Event)
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.187 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.196 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624332.1235774, 731f6e65-e951-4af3-aaf3-0322c02b154c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.197 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] VM Paused (Lifecycle Event)
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.229 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.237 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.264 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 14:45:32 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:32.312 107222 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 05 14:45:32 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:32.313 107222 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpmposspb3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 05 14:45:32 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:32.178 239805 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 14:45:32 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:32.186 239805 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 14:45:32 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:32.196 239805 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 05 14:45:32 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:32.196 239805 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239805
Jan 05 14:45:32 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:32.319 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[6008312f-728c-494d-8eed-8df6b08b3bf8]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.719 185478 DEBUG nova.compute.manager [req-c0e5843f-9026-46eb-bab0-edee2c3c0ed4 req-efb76ef5-7325-4859-ac1f-34a94f75d372 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Received event network-vif-plugged-c6393a71-e622-49d1-97df-e208cd2c8f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.721 185478 DEBUG oslo_concurrency.lockutils [req-c0e5843f-9026-46eb-bab0-edee2c3c0ed4 req-efb76ef5-7325-4859-ac1f-34a94f75d372 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.722 185478 DEBUG oslo_concurrency.lockutils [req-c0e5843f-9026-46eb-bab0-edee2c3c0ed4 req-efb76ef5-7325-4859-ac1f-34a94f75d372 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.723 185478 DEBUG oslo_concurrency.lockutils [req-c0e5843f-9026-46eb-bab0-edee2c3c0ed4 req-efb76ef5-7325-4859-ac1f-34a94f75d372 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.724 185478 DEBUG nova.compute.manager [req-c0e5843f-9026-46eb-bab0-edee2c3c0ed4 req-efb76ef5-7325-4859-ac1f-34a94f75d372 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Processing event network-vif-plugged-c6393a71-e622-49d1-97df-e208cd2c8f06 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.726 185478 DEBUG nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.733 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624332.733496, 731f6e65-e951-4af3-aaf3-0322c02b154c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.735 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] VM Resumed (Lifecycle Event)
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.745 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.755 185478 INFO nova.virt.libvirt.driver [-] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Instance spawned successfully.
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.756 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.764 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.773 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 14:45:32 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:32.775 239805 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:32 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:32.775 239805 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:32 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:32.775 239805 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.792 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.793 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.794 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.796 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.798 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.799 185478 DEBUG nova.virt.libvirt.driver [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.808 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.853 185478 INFO nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Took 10.84 seconds to spawn the instance on the hypervisor.
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.855 185478 DEBUG nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.939 185478 INFO nova.compute.manager [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Took 11.34 seconds to build instance.
Jan 05 14:45:32 compute-0 nova_compute[185474]: 2026-01-05 14:45:32.965 185478 DEBUG oslo_concurrency.lockutils [None req-ba07b846-11e8-47c0-b240-51080c7c25ab 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:33.288 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[351cb2b7-db1d-4ce7-bf53-ec798c129d95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:33.289 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap905a1599-21 in ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 14:45:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:33.292 239805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap905a1599-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 14:45:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:33.293 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[3b01b8ee-0b84-490d-abac-97ba80e74252]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:33.296 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[7041543d-4d2d-4ddd-85a1-8b59ea6ad2b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:33.331 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[1190e8b4-2d46-42e1-9fe7-a1e4429c47e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:33.364 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[73a86001-f906-4cfb-ac8d-206d8de4b6c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:33.367 107222 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp31q_8elf/privsep.sock']
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.401 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.402 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.403 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.426 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.435 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.436 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.437 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.438 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.539 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.626 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.627 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.717 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.718 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.815 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.816 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:45:33 compute-0 nova_compute[185474]: 2026-01-05 14:45:33.892 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:45:33 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 14:45:33 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 14:45:34 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:34.150 107222 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 05 14:45:34 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:34.151 107222 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp31q_8elf/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 05 14:45:34 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:34.020 239851 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 14:45:34 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:34.027 239851 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 14:45:34 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:34.031 239851 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 05 14:45:34 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:34.031 239851 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239851
Jan 05 14:45:34 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:34.156 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2be6e3-7e91-4789-8a58-aeed6e72d5b2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.217 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.363 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.364 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5489MB free_disk=72.4465446472168GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.365 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.365 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.571 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.571 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.571 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:45:34 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:34.647 239851 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:34 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:34.647 239851 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:34 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:34.647 239851 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.665 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing inventories for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.761 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating ProviderTree inventory for provider 81b80649-e249-4f86-9377-abfcf7fc43dd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.762 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.787 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing aggregate associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.805 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing trait associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, traits: HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.815 185478 DEBUG nova.compute.manager [req-95eb68e5-6800-442a-85b3-078b490c6a75 req-0d874357-641b-4d34-92d8-742d0956424f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Received event network-vif-plugged-c6393a71-e622-49d1-97df-e208cd2c8f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.815 185478 DEBUG oslo_concurrency.lockutils [req-95eb68e5-6800-442a-85b3-078b490c6a75 req-0d874357-641b-4d34-92d8-742d0956424f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.815 185478 DEBUG oslo_concurrency.lockutils [req-95eb68e5-6800-442a-85b3-078b490c6a75 req-0d874357-641b-4d34-92d8-742d0956424f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.816 185478 DEBUG oslo_concurrency.lockutils [req-95eb68e5-6800-442a-85b3-078b490c6a75 req-0d874357-641b-4d34-92d8-742d0956424f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.816 185478 DEBUG nova.compute.manager [req-95eb68e5-6800-442a-85b3-078b490c6a75 req-0d874357-641b-4d34-92d8-742d0956424f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] No waiting events found dispatching network-vif-plugged-c6393a71-e622-49d1-97df-e208cd2c8f06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.816 185478 WARNING nova.compute.manager [req-95eb68e5-6800-442a-85b3-078b490c6a75 req-0d874357-641b-4d34-92d8-742d0956424f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Received unexpected event network-vif-plugged-c6393a71-e622-49d1-97df-e208cd2c8f06 for instance with vm_state active and task_state None.
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.855 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.912 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updated inventory for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.913 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.913 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.940 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.940 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.941 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:34 compute-0 nova_compute[185474]: 2026-01-05 14:45:34.941 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.183 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a68d8b-25d2-4e66-b80f-3751de06f649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 NetworkManager[56139]: <info>  [1767624335.2350] manager: (tap905a1599-20): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.234 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f523c456-b062-4dc5-af84-5f675d09e0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.283 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[6977f385-ddc1-4276-8dfd-69b387418034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 systemd-udevd[239863]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.289 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2045d5-021a-4165-8186-1534fb287d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 NetworkManager[56139]: <info>  [1767624335.3293] device (tap905a1599-20): carrier: link connected
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.336 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[e63f8763-6e43-4c03-9346-508318271fb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.367 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[c807dc5e-f339-4770-8d95-d7edef4925d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap905a1599-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:e4:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 366227, 'reachable_time': 42662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239881, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.390 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[070b5806-98af-44e7-be68-1867ef6c5c04]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:e4dc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366227, 'tstamp': 366227}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239882, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.415 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f04742ae-0f73-4d5c-bfa8-38bfc5042541]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap905a1599-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:e4:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 366227, 'reachable_time': 42662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239883, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.472 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc54b85-77b0-4202-8ed1-4dca3ece112b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.573 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[02c0c91a-87aa-44a6-82a7-8dfe8cace480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.576 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905a1599-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.577 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.578 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap905a1599-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:45:35 compute-0 kernel: tap905a1599-20: entered promiscuous mode
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.585 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap905a1599-20, col_values=(('external_ids', {'iface-id': 'add49293-6ad0-4684-b3cd-091b92792de4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:45:35 compute-0 NetworkManager[56139]: <info>  [1767624335.5887] manager: (tap905a1599-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 05 14:45:35 compute-0 ovn_controller[97763]: 2026-01-05T14:45:35Z|00031|binding|INFO|Releasing lport add49293-6ad0-4684-b3cd-091b92792de4 from this chassis (sb_readonly=0)
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.592 107222 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/905a1599-2980-4b24-9705-76e3c8a469ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/905a1599-2980-4b24-9705-76e3c8a469ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.593 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d26d21-c8eb-4c0c-be62-169de64fbe37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.596 107222 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: global
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     log         /dev/log local0 debug
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     log-tag     haproxy-metadata-proxy-905a1599-2980-4b24-9705-76e3c8a469ea
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     user        root
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     group       root
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     maxconn     1024
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     pidfile     /var/lib/neutron/external/pids/905a1599-2980-4b24-9705-76e3c8a469ea.pid.haproxy
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     daemon
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: defaults
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     log global
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     mode http
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     option httplog
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     option dontlognull
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     option http-server-close
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     option forwardfor
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     retries                 3
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     timeout http-request    30s
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     timeout connect         30s
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     timeout client          32s
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     timeout server          32s
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     timeout http-keep-alive 30s
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: listen listener
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     bind 169.254.169.254:80
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:     http-request add-header X-OVN-Network-ID 905a1599-2980-4b24-9705-76e3c8a469ea
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 14:45:35 compute-0 nova_compute[185474]: 2026-01-05 14:45:35.594 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:35 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:35.602 107222 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'env', 'PROCESS_TAG=haproxy-905a1599-2980-4b24-9705-76e3c8a469ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/905a1599-2980-4b24-9705-76e3c8a469ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 14:45:35 compute-0 nova_compute[185474]: 2026-01-05 14:45:35.605 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:36 compute-0 podman[239915]: 2026-01-05 14:45:36.145951741 +0000 UTC m=+0.106074195 container create f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 05 14:45:36 compute-0 podman[239915]: 2026-01-05 14:45:36.088073975 +0000 UTC m=+0.048196459 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 14:45:36 compute-0 systemd[1]: Started libpod-conmon-f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c.scope.
Jan 05 14:45:36 compute-0 systemd[1]: Started libcrun container.
Jan 05 14:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/114e2ae5f10836ef271e2f8657dd1cf97aaf34ae3ba202a3294a00f2eaad14ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 14:45:36 compute-0 podman[239915]: 2026-01-05 14:45:36.344055582 +0000 UTC m=+0.304178126 container init f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 05 14:45:36 compute-0 podman[239915]: 2026-01-05 14:45:36.352308693 +0000 UTC m=+0.312431147 container start f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:45:36 compute-0 neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea[239930]: [NOTICE]   (239934) : New worker (239936) forked
Jan 05 14:45:36 compute-0 neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea[239930]: [NOTICE]   (239934) : Loading success.
Jan 05 14:45:36 compute-0 nova_compute[185474]: 2026-01-05 14:45:36.418 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:36 compute-0 nova_compute[185474]: 2026-01-05 14:45:36.420 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:36 compute-0 nova_compute[185474]: 2026-01-05 14:45:36.420 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:36 compute-0 nova_compute[185474]: 2026-01-05 14:45:36.421 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 14:45:36 compute-0 nova_compute[185474]: 2026-01-05 14:45:36.438 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 14:45:37 compute-0 nova_compute[185474]: 2026-01-05 14:45:37.417 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:38 compute-0 nova_compute[185474]: 2026-01-05 14:45:38.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:38 compute-0 nova_compute[185474]: 2026-01-05 14:45:38.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:45:38 compute-0 nova_compute[185474]: 2026-01-05 14:45:38.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:45:38 compute-0 nova_compute[185474]: 2026-01-05 14:45:38.434 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:38 compute-0 podman[239945]: 2026-01-05 14:45:38.648968434 +0000 UTC m=+0.122886514 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 14:45:38 compute-0 nova_compute[185474]: 2026-01-05 14:45:38.662 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:45:38 compute-0 nova_compute[185474]: 2026-01-05 14:45:38.662 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:45:38 compute-0 nova_compute[185474]: 2026-01-05 14:45:38.662 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:45:38 compute-0 nova_compute[185474]: 2026-01-05 14:45:38.663 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:45:38 compute-0 podman[239967]: 2026-01-05 14:45:38.831884741 +0000 UTC m=+0.130702313 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi)
Jan 05 14:45:39 compute-0 nova_compute[185474]: 2026-01-05 14:45:39.220 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:40 compute-0 nova_compute[185474]: 2026-01-05 14:45:40.783 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:45:40 compute-0 nova_compute[185474]: 2026-01-05 14:45:40.808 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:45:40 compute-0 nova_compute[185474]: 2026-01-05 14:45:40.809 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:45:40 compute-0 nova_compute[185474]: 2026-01-05 14:45:40.812 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:45:41 compute-0 podman[239988]: 2026-01-05 14:45:41.667590393 +0000 UTC m=+0.138514732 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=base rhel9, config_id=kepler, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Jan 05 14:45:43 compute-0 nova_compute[185474]: 2026-01-05 14:45:43.432 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:44 compute-0 nova_compute[185474]: 2026-01-05 14:45:44.224 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:44.799 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:45:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:44.800 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:45:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:44.801 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:45:48 compute-0 ovn_controller[97763]: 2026-01-05T14:45:48Z|00032|binding|INFO|Releasing lport add49293-6ad0-4684-b3cd-091b92792de4 from this chassis (sb_readonly=0)
Jan 05 14:45:48 compute-0 nova_compute[185474]: 2026-01-05 14:45:48.158 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:48 compute-0 NetworkManager[56139]: <info>  [1767624348.1657] manager: (patch-br-int-to-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Jan 05 14:45:48 compute-0 NetworkManager[56139]: <info>  [1767624348.1671] device (patch-br-int-to-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:45:48 compute-0 NetworkManager[56139]: <warn>  [1767624348.1677] device (patch-br-int-to-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 14:45:48 compute-0 NetworkManager[56139]: <info>  [1767624348.1696] manager: (patch-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Jan 05 14:45:48 compute-0 NetworkManager[56139]: <info>  [1767624348.1708] device (patch-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 14:45:48 compute-0 NetworkManager[56139]: <warn>  [1767624348.1709] device (patch-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 14:45:48 compute-0 NetworkManager[56139]: <info>  [1767624348.1725] manager: (patch-br-int-to-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 05 14:45:48 compute-0 NetworkManager[56139]: <info>  [1767624348.1740] manager: (patch-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 05 14:45:48 compute-0 NetworkManager[56139]: <info>  [1767624348.1749] device (patch-br-int-to-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 05 14:45:48 compute-0 NetworkManager[56139]: <info>  [1767624348.1754] device (patch-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 05 14:45:48 compute-0 ovn_controller[97763]: 2026-01-05T14:45:48Z|00033|binding|INFO|Releasing lport add49293-6ad0-4684-b3cd-091b92792de4 from this chassis (sb_readonly=0)
Jan 05 14:45:48 compute-0 nova_compute[185474]: 2026-01-05 14:45:48.221 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:48 compute-0 nova_compute[185474]: 2026-01-05 14:45:48.231 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:48 compute-0 nova_compute[185474]: 2026-01-05 14:45:48.436 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:48 compute-0 nova_compute[185474]: 2026-01-05 14:45:48.504 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:48 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:48.502 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:45:48 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:48.505 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 14:45:48 compute-0 nova_compute[185474]: 2026-01-05 14:45:48.525 185478 DEBUG nova.compute.manager [req-cf6223dc-6c1c-4e6e-b633-eb4e122a36ea req-b631e517-60cc-4ed1-8a75-1ff27004d31d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Received event network-changed-c6393a71-e622-49d1-97df-e208cd2c8f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:45:48 compute-0 nova_compute[185474]: 2026-01-05 14:45:48.526 185478 DEBUG nova.compute.manager [req-cf6223dc-6c1c-4e6e-b633-eb4e122a36ea req-b631e517-60cc-4ed1-8a75-1ff27004d31d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Refreshing instance network info cache due to event network-changed-c6393a71-e622-49d1-97df-e208cd2c8f06. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 14:45:48 compute-0 nova_compute[185474]: 2026-01-05 14:45:48.526 185478 DEBUG oslo_concurrency.lockutils [req-cf6223dc-6c1c-4e6e-b633-eb4e122a36ea req-b631e517-60cc-4ed1-8a75-1ff27004d31d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:45:48 compute-0 nova_compute[185474]: 2026-01-05 14:45:48.527 185478 DEBUG oslo_concurrency.lockutils [req-cf6223dc-6c1c-4e6e-b633-eb4e122a36ea req-b631e517-60cc-4ed1-8a75-1ff27004d31d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:45:48 compute-0 nova_compute[185474]: 2026-01-05 14:45:48.527 185478 DEBUG nova.network.neutron [req-cf6223dc-6c1c-4e6e-b633-eb4e122a36ea req-b631e517-60cc-4ed1-8a75-1ff27004d31d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Refreshing network info cache for port c6393a71-e622-49d1-97df-e208cd2c8f06 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 14:45:49 compute-0 nova_compute[185474]: 2026-01-05 14:45:49.228 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:49 compute-0 podman[240010]: 2026-01-05 14:45:49.62983238 +0000 UTC m=+0.111345976 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 14:45:51 compute-0 nova_compute[185474]: 2026-01-05 14:45:51.187 185478 DEBUG nova.network.neutron [req-cf6223dc-6c1c-4e6e-b633-eb4e122a36ea req-b631e517-60cc-4ed1-8a75-1ff27004d31d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated VIF entry in instance network info cache for port c6393a71-e622-49d1-97df-e208cd2c8f06. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 14:45:51 compute-0 nova_compute[185474]: 2026-01-05 14:45:51.188 185478 DEBUG nova.network.neutron [req-cf6223dc-6c1c-4e6e-b633-eb4e122a36ea req-b631e517-60cc-4ed1-8a75-1ff27004d31d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:45:51 compute-0 nova_compute[185474]: 2026-01-05 14:45:51.212 185478 DEBUG oslo_concurrency.lockutils [req-cf6223dc-6c1c-4e6e-b633-eb4e122a36ea req-b631e517-60cc-4ed1-8a75-1ff27004d31d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:45:53 compute-0 nova_compute[185474]: 2026-01-05 14:45:53.441 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:53 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:45:53.508 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:45:54 compute-0 nova_compute[185474]: 2026-01-05 14:45:54.232 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:57 compute-0 podman[240030]: 2026-01-05 14:45:57.623806865 +0000 UTC m=+0.108922361 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, release=1755695350, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter)
Jan 05 14:45:58 compute-0 nova_compute[185474]: 2026-01-05 14:45:58.444 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:59 compute-0 nova_compute[185474]: 2026-01-05 14:45:59.235 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:45:59 compute-0 podman[240050]: 2026-01-05 14:45:59.677160697 +0000 UTC m=+0.163089679 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 14:45:59 compute-0 podman[201880]: time="2026-01-05T14:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:45:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:45:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4351 "" "Go-http-client/1.1"
Jan 05 14:46:01 compute-0 openstack_network_exporter[205179]: ERROR   14:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:46:01 compute-0 openstack_network_exporter[205179]: ERROR   14:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:46:02 compute-0 podman[240073]: 2026-01-05 14:46:02.651073109 +0000 UTC m=+0.125763661 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 05 14:46:02 compute-0 podman[240072]: 2026-01-05 14:46:02.669332047 +0000 UTC m=+0.140843124 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:46:03 compute-0 nova_compute[185474]: 2026-01-05 14:46:03.446 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:04 compute-0 nova_compute[185474]: 2026-01-05 14:46:04.236 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:06 compute-0 ovn_controller[97763]: 2026-01-05T14:46:06Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:7f:70 192.168.0.178
Jan 05 14:46:06 compute-0 ovn_controller[97763]: 2026-01-05T14:46:06Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:7f:70 192.168.0.178
Jan 05 14:46:06 compute-0 nova_compute[185474]: 2026-01-05 14:46:06.938 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:46:06 compute-0 nova_compute[185474]: 2026-01-05 14:46:06.962 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Triggering sync for uuid 731f6e65-e951-4af3-aaf3-0322c02b154c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 14:46:06 compute-0 nova_compute[185474]: 2026-01-05 14:46:06.963 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "731f6e65-e951-4af3-aaf3-0322c02b154c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:06 compute-0 nova_compute[185474]: 2026-01-05 14:46:06.963 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:07 compute-0 nova_compute[185474]: 2026-01-05 14:46:07.016 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:08 compute-0 nova_compute[185474]: 2026-01-05 14:46:08.451 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:08 compute-0 podman[240130]: 2026-01-05 14:46:08.889843277 +0000 UTC m=+0.120997373 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:46:09 compute-0 podman[240153]: 2026-01-05 14:46:09.019350387 +0000 UTC m=+0.089958704 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 05 14:46:09 compute-0 nova_compute[185474]: 2026-01-05 14:46:09.238 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:12 compute-0 podman[240173]: 2026-01-05 14:46:12.626306381 +0000 UTC m=+0.098530303 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, config_id=kepler, managed_by=edpm_ansible, architecture=x86_64, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543)
Jan 05 14:46:13 compute-0 nova_compute[185474]: 2026-01-05 14:46:13.454 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:14 compute-0 nova_compute[185474]: 2026-01-05 14:46:14.242 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:18 compute-0 ovn_controller[97763]: 2026-01-05T14:46:18Z|00034|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Jan 05 14:46:18 compute-0 nova_compute[185474]: 2026-01-05 14:46:18.458 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:19 compute-0 nova_compute[185474]: 2026-01-05 14:46:19.246 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:20 compute-0 podman[240193]: 2026-01-05 14:46:20.645587713 +0000 UTC m=+0.120451979 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 05 14:46:23 compute-0 nova_compute[185474]: 2026-01-05 14:46:23.462 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:24 compute-0 nova_compute[185474]: 2026-01-05 14:46:24.248 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:28 compute-0 nova_compute[185474]: 2026-01-05 14:46:28.465 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:28 compute-0 podman[240213]: 2026-01-05 14:46:28.656664047 +0000 UTC m=+0.139218173 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 14:46:29 compute-0 nova_compute[185474]: 2026-01-05 14:46:29.252 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:29 compute-0 podman[201880]: time="2026-01-05T14:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:46:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:46:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4354 "" "Go-http-client/1.1"
Jan 05 14:46:30 compute-0 podman[240233]: 2026-01-05 14:46:30.692603897 +0000 UTC m=+0.171208833 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:46:31 compute-0 openstack_network_exporter[205179]: ERROR   14:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:46:31 compute-0 openstack_network_exporter[205179]: ERROR   14:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:46:33 compute-0 nova_compute[185474]: 2026-01-05 14:46:33.419 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:46:33 compute-0 nova_compute[185474]: 2026-01-05 14:46:33.468 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:33 compute-0 podman[240257]: 2026-01-05 14:46:33.63603673 +0000 UTC m=+0.114128072 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:46:33 compute-0 podman[240258]: 2026-01-05 14:46:33.668690427 +0000 UTC m=+0.135872443 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:46:34 compute-0 nova_compute[185474]: 2026-01-05 14:46:34.255 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:34 compute-0 nova_compute[185474]: 2026-01-05 14:46:34.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.451 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.451 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.451 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.452 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.553 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.656 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.658 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.752 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.754 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.858 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.860 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:35 compute-0 nova_compute[185474]: 2026-01-05 14:46:35.957 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.455 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.456 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5255MB free_disk=72.4249382019043GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.456 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.457 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.554 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.555 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.555 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.605 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.621 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.624 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:46:36 compute-0 nova_compute[185474]: 2026-01-05 14:46:36.624 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.749 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.751 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.752 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524bad0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:46:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:37.764 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 731f6e65-e951-4af3-aaf3-0322c02b154c from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 14:46:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:38.220 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/731f6e65-e951-4af3-aaf3-0322c02b154c -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}3a9a6b0d955f091f392374a695f163a2995629ca5c315b3823e8a6b9c12e4c9b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 14:46:38 compute-0 nova_compute[185474]: 2026-01-05 14:46:38.472 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:38 compute-0 nova_compute[185474]: 2026-01-05 14:46:38.625 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:46:38 compute-0 nova_compute[185474]: 2026-01-05 14:46:38.626 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:46:38 compute-0 nova_compute[185474]: 2026-01-05 14:46:38.626 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:46:39 compute-0 nova_compute[185474]: 2026-01-05 14:46:39.258 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:39 compute-0 nova_compute[185474]: 2026-01-05 14:46:39.480 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:46:39 compute-0 nova_compute[185474]: 2026-01-05 14:46:39.480 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:46:39 compute-0 nova_compute[185474]: 2026-01-05 14:46:39.481 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:46:39 compute-0 nova_compute[185474]: 2026-01-05 14:46:39.481 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.503 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Mon, 05 Jan 2026 14:46:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-df586d2d-138f-4922-a348-4e8f3de98b46 x-openstack-request-id: req-df586d2d-138f-4922-a348-4e8f3de98b46 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.503 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "731f6e65-e951-4af3-aaf3-0322c02b154c", "name": "test_0", "status": "ACTIVE", "tenant_id": "54417029b2fb4b749e20754214013802", "user_id": "4c0cf318026a40748762c9e05cd1efe0", "metadata": {}, "hostId": "35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc", "image": {"id": "22e54d95-dd91-4f66-a65f-ce9984e648dc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/22e54d95-dd91-4f66-a65f-ce9984e648dc"}]}, "flavor": {"id": "afe04c80-f0ab-417e-844c-b5b05cc96b17", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/afe04c80-f0ab-417e-844c-b5b05cc96b17"}]}, "created": "2026-01-05T14:45:18Z", "updated": "2026-01-05T14:45:32Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.178", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f3:7f:70"}, {"version": 4, "addr": "192.168.122.228", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f3:7f:70"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/731f6e65-e951-4af3-aaf3-0322c02b154c"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/731f6e65-e951-4af3-aaf3-0322c02b154c"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-05T14:45:32.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.504 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/731f6e65-e951-4af3-aaf3-0322c02b154c used request id req-df586d2d-138f-4922-a348-4e8f3de98b46 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.506 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'name': 'test_0', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.507 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.508 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T14:46:39.507661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.598 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 1711139806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.599 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 18915144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.599 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.600 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.600 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.601 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 396012509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T14:46:39.601128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.602 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 113701999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.602 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 62657112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.603 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.603 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T14:46:39.603703) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.604 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.604 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.604 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.605 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.605 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.605 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.605 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.605 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.606 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T14:46:39.605842) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.635 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.635 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.635 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.636 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.636 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.636 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.637 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.637 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T14:46:39.636945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.642 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 731f6e65-e951-4af3-aaf3-0322c02b154c / tapc6393a71-e6 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.642 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.643 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.644 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.644 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T14:46:39.643909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.644 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.645 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.645 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.646 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.647 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T14:46:39.645992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.647 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.648 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.648 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T14:46:39.647368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.649 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.649 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T14:46:39.649417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.650 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.650 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.651 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 podman[240311]: 2026-01-05 14:46:39.651257934 +0000 UTC m=+0.124536495 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.651 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 podman[240312]: 2026-01-05 14:46:39.652723554 +0000 UTC m=+0.130827367 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T14:46:39.651487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.680 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/cpu volume: 34140000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.681 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.681 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.682 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T14:46:39.681490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.682 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.683 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.683 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T14:46:39.683085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.684 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.684 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.685 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T14:46:39.684789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.685 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.686 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.686 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.686 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.686 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T14:46:39.686772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.687 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.687 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T14:46:39.688310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.689 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.689 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.689 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.689 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.689 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.690 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.690 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.691 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T14:46:39.690316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.691 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.692 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.692 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.693 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.693 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T14:46:39.692656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.693 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.693 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.693 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.694 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.694 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.694 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.694 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T14:46:39.694134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.695 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.695 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.696 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T14:46:39.695603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.696 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.696 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.696 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.697 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes volume: 2062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T14:46:39.696836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.697 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.698 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.698 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.698 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.698 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.698 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T14:46:39.698371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.699 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.699 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.699 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.699 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.700 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.700 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T14:46:39.699691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.700 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.700 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.700 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.700 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.701 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/memory.usage volume: 49.56640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T14:46:39.700770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.701 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.701 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.702 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T14:46:39.701853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.702 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.702 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.703 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.703 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.703 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.703 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.704 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes volume: 1884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.704 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T14:46:39.703593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.704 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.704 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.705 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 225 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.705 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T14:46:39.704681) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.705 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.706 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.707 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:46:39.708 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:46:41 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:41.486 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:46:41 compute-0 nova_compute[185474]: 2026-01-05 14:46:41.487 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:41 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:41.490 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 14:46:42 compute-0 nova_compute[185474]: 2026-01-05 14:46:42.770 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
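The instance_info_cache update above packs the whole VIF into one line; its key fields, copied verbatim from that line and flattened here purely for readability, are:

    # Condensed excerpt of the network_info entry logged above; values are
    # copied from the cache update, nesting flattened for brevity.
    vif = {
        'id': 'c6393a71-e622-49d1-97df-e208cd2c8f06',
        'address': 'fa:16:3e:f3:7f:70',
        'type': 'ovs',
        'vnic_type': 'normal',
        'devname': 'tapc6393a71-e6',
        'bridge': 'br-int',
        'network_id': '905a1599-2980-4b24-9705-76e3c8a469ea',
        'cidr': '192.168.0.0/24',
        'fixed_ip': '192.168.0.178',
        'floating_ip': '192.168.122.228',
        'gateway': '192.168.0.1',
        'mtu': 1442,
        'bound_drivers': {'0': 'ovn'},
        'active': True,
    }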
Jan 05 14:46:42 compute-0 nova_compute[185474]: 2026-01-05 14:46:42.790 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:46:42 compute-0 nova_compute[185474]: 2026-01-05 14:46:42.790 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:46:42 compute-0 nova_compute[185474]: 2026-01-05 14:46:42.790 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:46:42 compute-0 nova_compute[185474]: 2026-01-05 14:46:42.791 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:46:42 compute-0 nova_compute[185474]: 2026-01-05 14:46:42.791 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:46:43 compute-0 nova_compute[185474]: 2026-01-05 14:46:43.475 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:43 compute-0 podman[240355]: 2026-01-05 14:46:43.617924124 +0000 UTC m=+0.102520237 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=kepler, version=9.4, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.29.0, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
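The health_status event above carries the kepler container's config_data inline; reflowed here as a Python literal purely for readability (every value is copied verbatim from that line):

    kepler_config = {
        'command': '-v=2',
        'environment': {
            'ENABLE_GPU': 'true',
            'ENABLE_PROCESS_METRICS': 'true',
            'EXPOSE_CONTAINER_METRICS': 'true',
            'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false',
            'EXPOSE_VM_METRICS': 'true',
            'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1',
        },
        'healthcheck': {
            'mount': '/var/lib/openstack/healthchecks/kepler',
            'test': '/openstack/healthcheck kepler',
        },
        'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12',
        'net': 'host',
        'ports': ['8888:8888'],
        'privileged': True,
        'recreate': True,
        'restart': 'always',
        'volumes': [
            '/lib/modules:/lib/modules:ro',
            '/run/libvirt:/run/libvirt:shared,ro',
            '/sys:/sys',
            '/proc:/proc',
            '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z',
        ],
    }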
Jan 05 14:46:44 compute-0 nova_compute[185474]: 2026-01-05 14:46:44.262 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:44.800 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:44.802 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:44.803 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:46 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:46.494 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.389 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.390 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
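The acquire/held pair above is oslo.concurrency's standard logging around the per-instance build lock, keyed on the instance UUID. A minimal sketch of the pattern using the public lockutils API; the function names mirror the log, the body is illustrative:

    from oslo_concurrency import lockutils

    def build_and_run_instance(instance_uuid):

        # Serializes concurrent build requests for the same instance, which
        # produces the "Acquiring lock ..." / "acquired by ..." messages above.
        @lockutils.synchronized(instance_uuid)
        def _locked_do_build_and_run_instance():
            ...  # claim resources, build networking, spawn the guest

        _locked_do_build_and_run_instance()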
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.407 185478 DEBUG nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.479 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.504 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.505 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.517 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.518 185478 INFO nova.compute.claims [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Claim successful on node compute-0.ctlplane.example.com
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.673 185478 DEBUG nova.compute.provider_tree [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.696 185478 DEBUG nova.scheduler.client.report [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
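For readability, the unchanged inventory reported for provider 81b80649-e249-4f86-9377-abfcf7fc43dd, reflowed from the line above (values copied verbatim):

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1,
                      'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0},
        'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1,
                 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1,
                    'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9},
    }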
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.725 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.727 185478 DEBUG nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.785 185478 DEBUG nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.786 185478 DEBUG nova.network.neutron [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.808 185478 INFO nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.847 185478 DEBUG nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.987 185478 DEBUG nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.990 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.990 185478 INFO nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Creating image(s)
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.992 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.992 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:48 compute-0 nova_compute[185474]: 2026-01-05 14:46:48.994 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.023 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.118 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.120 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.121 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.145 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.238 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.239 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4,backing_fmt=raw /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.265 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.288 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4,backing_fmt=raw /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.288 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.289 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.377 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.378 185478 DEBUG nova.virt.disk.api [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Checking if we can resize image /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.378 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.471 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.472 185478 DEBUG nova.virt.disk.api [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Cannot resize image /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.472 185478 DEBUG nova.objects.instance [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'migration_context' on Instance uuid bdb0ea32-677c-48d8-ae08-c15ba402d14f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.516 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.517 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.518 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.534 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.626 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.628 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.629 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.653 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.748 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.749 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.803 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.805 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.806 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.901 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.903 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.904 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Ensure instance console log exists: /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.905 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.906 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:49 compute-0 nova_compute[185474]: 2026-01-05 14:46:49.907 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:51 compute-0 nova_compute[185474]: 2026-01-05 14:46:51.444 185478 DEBUG nova.network.neutron [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Successfully updated port: 9e6c6e1b-0aed-450f-a239-509674dfe31f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 14:46:51 compute-0 nova_compute[185474]: 2026-01-05 14:46:51.462 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:46:51 compute-0 nova_compute[185474]: 2026-01-05 14:46:51.462 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquired lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:46:51 compute-0 nova_compute[185474]: 2026-01-05 14:46:51.463 185478 DEBUG nova.network.neutron [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 14:46:51 compute-0 nova_compute[185474]: 2026-01-05 14:46:51.555 185478 DEBUG nova.compute.manager [req-558fe931-21bd-46ba-87bf-71074a659947 req-b7be81b4-7c8b-4c0f-a5df-cf560c016686 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Received event network-changed-9e6c6e1b-0aed-450f-a239-509674dfe31f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:46:51 compute-0 nova_compute[185474]: 2026-01-05 14:46:51.555 185478 DEBUG nova.compute.manager [req-558fe931-21bd-46ba-87bf-71074a659947 req-b7be81b4-7c8b-4c0f-a5df-cf560c016686 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Refreshing instance network info cache due to event network-changed-9e6c6e1b-0aed-450f-a239-509674dfe31f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 14:46:51 compute-0 nova_compute[185474]: 2026-01-05 14:46:51.556 185478 DEBUG oslo_concurrency.lockutils [req-558fe931-21bd-46ba-87bf-71074a659947 req-b7be81b4-7c8b-4c0f-a5df-cf560c016686 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:46:51 compute-0 podman[240403]: 2026-01-05 14:46:51.630732545 +0000 UTC m=+0.109562768 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Jan 05 14:46:51 compute-0 nova_compute[185474]: 2026-01-05 14:46:51.633 185478 DEBUG nova.network.neutron [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.277 185478 DEBUG nova.network.neutron [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updating instance_info_cache with network_info: [{"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.308 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Releasing lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.309 185478 DEBUG nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Instance network_info: |[{"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.309 185478 DEBUG oslo_concurrency.lockutils [req-558fe931-21bd-46ba-87bf-71074a659947 req-b7be81b4-7c8b-4c0f-a5df-cf560c016686 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.310 185478 DEBUG nova.network.neutron [req-558fe931-21bd-46ba-87bf-71074a659947 req-b7be81b4-7c8b-4c0f-a5df-cf560c016686 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Refreshing network info cache for port 9e6c6e1b-0aed-450f-a239-509674dfe31f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.316 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Start _get_guest_xml network_info=[{"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T14:44:12Z,direct_url=<?>,disk_format='qcow2',id=22e54d95-dd91-4f66-a65f-ce9984e648dc,min_disk=0,min_ram=0,name='cirros',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T14:44:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}], 'ephemerals': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.327 185478 WARNING nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.340 185478 DEBUG nova.virt.libvirt.host [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.341 185478 DEBUG nova.virt.libvirt.host [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.350 185478 DEBUG nova.virt.libvirt.host [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.351 185478 DEBUG nova.virt.libvirt.host [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.352 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.352 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T14:44:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='afe04c80-f0ab-417e-844c-b5b05cc96b17',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T14:44:12Z,direct_url=<?>,disk_format='qcow2',id=22e54d95-dd91-4f66-a65f-ce9984e648dc,min_disk=0,min_ram=0,name='cirros',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T14:44:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.353 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.354 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.354 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.355 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.355 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.356 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.356 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.357 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.357 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.358 185478 DEBUG nova.virt.hardware [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.364 185478 DEBUG nova.virt.libvirt.vif [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T14:46:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq',id=2,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='fb98dcdd-a12e-44ca-97ca-fe43134a3faa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-17jyzkt5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T14:46:48Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcyNTIxNjc1NTE2OTA2Mzk4Mjg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzI1MjE2NzU1MTY5MDYzOTgyOD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcyNTIxNjc1NTE2OTA2Mzk4Mjg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 05 14:46:53 compute-0 nova_compute[185474]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzI1MjE2NzU1MTY5MDYzOTgyOD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcyNTIxNjc1NTE2OTA2Mzk4Mjg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0tLQo=',user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=bdb0ea32-677c-48d8-ae08-c15ba402d14f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.365 185478 DEBUG nova.network.os_vif_util [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.366 185478 DEBUG nova.network.os_vif_util [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:9f:84,bridge_name='br-int',has_traffic_filtering=True,id=9e6c6e1b-0aed-450f-a239-509674dfe31f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9e6c6e1b-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.368 185478 DEBUG nova.objects.instance [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'pci_devices' on Instance uuid bdb0ea32-677c-48d8-ae08-c15ba402d14f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.391 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] End _get_guest_xml xml=<domain type="kvm">
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <uuid>bdb0ea32-677c-48d8-ae08-c15ba402d14f</uuid>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <name>instance-00000002</name>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <memory>524288</memory>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <metadata>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <nova:name>vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq</nova:name>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 14:46:53</nova:creationTime>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <nova:flavor name="m1.small">
Jan 05 14:46:53 compute-0 nova_compute[185474]:         <nova:memory>512</nova:memory>
Jan 05 14:46:53 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 14:46:53 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 14:46:53 compute-0 nova_compute[185474]:         <nova:ephemeral>1</nova:ephemeral>
Jan 05 14:46:53 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 14:46:53 compute-0 nova_compute[185474]:         <nova:user uuid="4c0cf318026a40748762c9e05cd1efe0">admin</nova:user>
Jan 05 14:46:53 compute-0 nova_compute[185474]:         <nova:project uuid="54417029b2fb4b749e20754214013802">admin</nova:project>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="22e54d95-dd91-4f66-a65f-ce9984e648dc"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <nova:ports>
Jan 05 14:46:53 compute-0 nova_compute[185474]:         <nova:port uuid="9e6c6e1b-0aed-450f-a239-509674dfe31f">
Jan 05 14:46:53 compute-0 nova_compute[185474]:           <nova:ip type="fixed" address="192.168.0.224" ipVersion="4"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:         </nova:port>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       </nova:ports>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   </metadata>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <system>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <entry name="serial">bdb0ea32-677c-48d8-ae08-c15ba402d14f</entry>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <entry name="uuid">bdb0ea32-677c-48d8-ae08-c15ba402d14f</entry>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     </system>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <os>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   </os>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <features>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <apic/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   </features>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   </clock>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   </cpu>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   <devices>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <target dev="vdb" bus="virtio"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.config"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <interface type="ethernet">
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <mac address="fa:16:3e:4a:9f:84"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <mtu size="1442"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <target dev="tap9e6c6e1b-0a"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     </interface>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/console.log" append="off"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     </serial>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <video>
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     </video>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     </rng>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 14:46:53 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 14:46:53 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 14:46:53 compute-0 nova_compute[185474]:   </devices>
Jan 05 14:46:53 compute-0 nova_compute[185474]: </domain>
Jan 05 14:46:53 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
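The block above is the complete libvirt guest definition Nova generated for instance bdb0ea32-677c-48d8-ae08-c15ba402d14f. As a hedged illustration (not taken from the log), the same definition can be read back from libvirt once the domain exists; the connection URI and the domain name instance-00000002 are assumptions inferred from the qemu-2-instance-00000002 machine that systemd registers further down.

    import libvirt

    # Read-only connection to the local hypervisor (assumption: default system URI).
    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000002')   # assumed libvirt name for Nova instance id=2
    print(dom.XMLDesc(0))                          # live XML, corresponding to the dump above
    conn.close()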
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.392 185478 DEBUG nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Preparing to wait for external event network-vif-plugged-9e6c6e1b-0aed-450f-a239-509674dfe31f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.392 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.392 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.393 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.394 185478 DEBUG nova.virt.libvirt.vif [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T14:46:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq',id=2,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='fb98dcdd-a12e-44ca-97ca-fe43134a3faa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-17jyzkt5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T14:46:48Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcyNTIxNjc1NTE2OTA2Mzk4Mjg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzI1MjE2NzU1MTY5MDYzOTgyOD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcyNTIxNjc1NTE2OTA2Mzk4Mjg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 05 14:46:53 compute-0 nova_compute[185474]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzI1MjE2NzU1MTY5MDYzOTgyOD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcyNTIxNjc1NTE2OTA2Mzk4Mjg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0tLQo=',user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=bdb0ea32-677c-48d8-ae08-c15ba402d14f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.394 185478 DEBUG nova.network.os_vif_util [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.395 185478 DEBUG nova.network.os_vif_util [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:9f:84,bridge_name='br-int',has_traffic_filtering=True,id=9e6c6e1b-0aed-450f-a239-509674dfe31f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9e6c6e1b-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.396 185478 DEBUG os_vif [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:9f:84,bridge_name='br-int',has_traffic_filtering=True,id=9e6c6e1b-0aed-450f-a239-509674dfe31f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9e6c6e1b-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.397 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.397 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.398 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.402 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.403 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e6c6e1b-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.403 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e6c6e1b-0a, col_values=(('external_ids', {'iface-id': '9e6c6e1b-0aed-450f-a239-509674dfe31f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:9f:84', 'vm-uuid': 'bdb0ea32-677c-48d8-ae08-c15ba402d14f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
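The two ovsdbapp commands above (AddPortCommand plus the DbSetCommand on the Interface record) are what wires the tap device into br-int. A rough, hedged equivalent expressed as ovs-vsctl calls driven from Python, using the values from the log; Nova itself goes through the native OVSDB IDL rather than the CLI:

    import subprocess

    port = 'tap9e6c6e1b-0a'
    iface_id = '9e6c6e1b-0aed-450f-a239-509674dfe31f'
    mac = 'fa:16:3e:4a:9f:84'
    vm_uuid = 'bdb0ea32-677c-48d8-ae08-c15ba402d14f'

    # Add the port to the integration bridge (idempotent, like may_exist=True above).
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port], check=True)
    # Attach the Neutron port identity so ovn-controller can claim the interface.
    subprocess.run(['ovs-vsctl', 'set', 'Interface', port,
                    f'external_ids:iface-id={iface_id}',
                    'external_ids:iface-status=active',
                    f'external_ids:attached-mac={mac}',
                    f'external_ids:vm-uuid={vm_uuid}'], check=True)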
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.406 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:53 compute-0 NetworkManager[56139]: <info>  [1767624413.4076] manager: (tap9e6c6e1b-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.409 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.421 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.422 185478 INFO os_vif [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:9f:84,bridge_name='br-int',has_traffic_filtering=True,id=9e6c6e1b-0aed-450f-a239-509674dfe31f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9e6c6e1b-0a')
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.480 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.493 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.494 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.494 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.494 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No VIF found with MAC fa:16:3e:4a:9f:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.495 185478 INFO nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Using config drive
Jan 05 14:46:53 compute-0 rsyslogd[237079]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 14:46:53.364 185478 DEBUG nova.virt.libvirt.vif [None req-c0dafc56-53 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 14:46:53 compute-0 rsyslogd[237079]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 14:46:53.394 185478 DEBUG nova.virt.libvirt.vif [None req-c0dafc56-53 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.880 185478 INFO nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Creating config drive at /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.config
Jan 05 14:46:53 compute-0 nova_compute[185474]: 2026-01-05 14:46:53.894 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjq9gcuf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.040 185478 DEBUG oslo_concurrency.processutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxjq9gcuf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
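The two processutils lines above show the exact command used to build the config drive ISO. A hedged sketch of the same invocation from Python, with the paths copied from the log (the /tmp directory is the temporary metadata tree Nova had just written for this boot):

    import subprocess

    iso = '/var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.config'
    staging = '/tmp/tmpxjq9gcuf'   # temporary metadata tree generated by Nova

    subprocess.run(['/usr/bin/mkisofs', '-o', iso,
                    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
                    '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
                    '-quiet', '-J', '-r', '-V', 'config-2', staging],
                   check=True)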
Jan 05 14:46:54 compute-0 kernel: tap9e6c6e1b-0a: entered promiscuous mode
Jan 05 14:46:54 compute-0 NetworkManager[56139]: <info>  [1767624414.1435] manager: (tap9e6c6e1b-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 05 14:46:54 compute-0 ovn_controller[97763]: 2026-01-05T14:46:54Z|00035|binding|INFO|Claiming lport 9e6c6e1b-0aed-450f-a239-509674dfe31f for this chassis.
Jan 05 14:46:54 compute-0 ovn_controller[97763]: 2026-01-05T14:46:54Z|00036|binding|INFO|9e6c6e1b-0aed-450f-a239-509674dfe31f: Claiming fa:16:3e:4a:9f:84 192.168.0.224
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.149 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.154 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.167 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:9f:84 192.168.0.224'], port_security=['fa:16:3e:4a:9f:84 192.168.0.224'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-zgjawdmpyczt-m5q5u5dyljo6-j3mxrhypctaw-port-4zgpnsyftszn', 'neutron:cidrs': '192.168.0.224/24', 'neutron:device_id': 'bdb0ea32-677c-48d8-ae08-c15ba402d14f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-905a1599-2980-4b24-9705-76e3c8a469ea', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-zgjawdmpyczt-m5q5u5dyljo6-j3mxrhypctaw-port-4zgpnsyftszn', 'neutron:project_id': '54417029b2fb4b749e20754214013802', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a131d1b-ed26-4729-8c09-f87c7299dcd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9f4be22-b417-4efb-ba81-f8a9c3c4527d, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=9e6c6e1b-0aed-450f-a239-509674dfe31f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.170 107222 INFO neutron.agent.ovn.metadata.agent [-] Port 9e6c6e1b-0aed-450f-a239-509674dfe31f in datapath 905a1599-2980-4b24-9705-76e3c8a469ea bound to our chassis
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.174 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 905a1599-2980-4b24-9705-76e3c8a469ea
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.176 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:54 compute-0 ovn_controller[97763]: 2026-01-05T14:46:54Z|00037|binding|INFO|Setting lport 9e6c6e1b-0aed-450f-a239-509674dfe31f ovn-installed in OVS
Jan 05 14:46:54 compute-0 ovn_controller[97763]: 2026-01-05T14:46:54Z|00038|binding|INFO|Setting lport 9e6c6e1b-0aed-450f-a239-509674dfe31f up in Southbound
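At this point ovn-controller has claimed the logical port for this chassis and marked it up in the Southbound database. A hedged check, not taken from the log, for confirming the binding from the compute node (assuming ovn-sbctl is already pointed at the local Southbound connection):

    import subprocess

    # Look up the Port_Binding row for the logical port claimed above.
    subprocess.run(['ovn-sbctl', 'find', 'Port_Binding',
                    'logical_port=9e6c6e1b-0aed-450f-a239-509674dfe31f'], check=True)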
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.181 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.201 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[3762fedf-46a7-4260-8fc4-53d0916cec73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:46:54 compute-0 systemd-machined[156786]: New machine qemu-2-instance-00000002.
Jan 05 14:46:54 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 05 14:46:54 compute-0 systemd-udevd[240446]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.253 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[6e088f6c-4403-4182-b00d-712c132fc4cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.257 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[c67b3a1f-c3f8-48a7-91e7-87869e7409aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:46:54 compute-0 NetworkManager[56139]: <info>  [1767624414.2730] device (tap9e6c6e1b-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 14:46:54 compute-0 NetworkManager[56139]: <info>  [1767624414.2740] device (tap9e6c6e1b-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.292 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[49e11572-7c23-43a9-a276-f000d80602a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.313 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[184c6336-6887-4c60-a3a6-63896400178d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap905a1599-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:e4:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 366227, 'reachable_time': 29363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240457, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.334 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f86e2fe9-8c97-4486-8742-47ea051c5826]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366246, 'tstamp': 366246}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240458, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366251, 'tstamp': 366251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240458, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
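The privsep replies above show the metadata agent confirming that the namespace interface tap905a1599-21 carries both 169.254.169.254/32 and 192.168.0.2/24. A hedged way to re-check this by hand, assuming the namespace name reported in the reply target (ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea):

    import subprocess

    ns = 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea'
    # List the addresses on the metadata interface inside the OVN metadata namespace.
    subprocess.run(['ip', 'netns', 'exec', ns,
                    'ip', 'addr', 'show', 'dev', 'tap905a1599-21'], check=True)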
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.337 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905a1599-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.339 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.342 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.343 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap905a1599-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.344 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.345 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap905a1599-20, col_values=(('external_ids', {'iface-id': 'add49293-6ad0-4684-b3cd-091b92792de4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:46:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:46:54.346 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.642 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624414.640047, bdb0ea32-677c-48d8-ae08-c15ba402d14f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.643 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] VM Started (Lifecycle Event)
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.668 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.678 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624414.6403477, bdb0ea32-677c-48d8-ae08-c15ba402d14f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.679 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] VM Paused (Lifecycle Event)
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.710 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.718 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.743 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.832 185478 DEBUG nova.network.neutron [req-558fe931-21bd-46ba-87bf-71074a659947 req-b7be81b4-7c8b-4c0f-a5df-cf560c016686 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updated VIF entry in instance network info cache for port 9e6c6e1b-0aed-450f-a239-509674dfe31f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.832 185478 DEBUG nova.network.neutron [req-558fe931-21bd-46ba-87bf-71074a659947 req-b7be81b4-7c8b-4c0f-a5df-cf560c016686 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updating instance_info_cache with network_info: [{"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:46:54 compute-0 nova_compute[185474]: 2026-01-05 14:46:54.848 185478 DEBUG oslo_concurrency.lockutils [req-558fe931-21bd-46ba-87bf-71074a659947 req-b7be81b4-7c8b-4c0f-a5df-cf560c016686 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.185 185478 DEBUG nova.compute.manager [req-e424aa85-fd85-4018-9081-1e9dcafc2d74 req-3a395c64-899d-4977-abce-2f81df035520 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Received event network-vif-plugged-9e6c6e1b-0aed-450f-a239-509674dfe31f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.185 185478 DEBUG oslo_concurrency.lockutils [req-e424aa85-fd85-4018-9081-1e9dcafc2d74 req-3a395c64-899d-4977-abce-2f81df035520 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.186 185478 DEBUG oslo_concurrency.lockutils [req-e424aa85-fd85-4018-9081-1e9dcafc2d74 req-3a395c64-899d-4977-abce-2f81df035520 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.186 185478 DEBUG oslo_concurrency.lockutils [req-e424aa85-fd85-4018-9081-1e9dcafc2d74 req-3a395c64-899d-4977-abce-2f81df035520 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.186 185478 DEBUG nova.compute.manager [req-e424aa85-fd85-4018-9081-1e9dcafc2d74 req-3a395c64-899d-4977-abce-2f81df035520 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Processing event network-vif-plugged-9e6c6e1b-0aed-450f-a239-509674dfe31f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.187 185478 DEBUG nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.192 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624415.1924725, bdb0ea32-677c-48d8-ae08-c15ba402d14f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.193 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] VM Resumed (Lifecycle Event)
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.196 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.203 185478 INFO nova.virt.libvirt.driver [-] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Instance spawned successfully.
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.204 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.215 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.335 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
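For reference on the numeric states in the two sync messages ("DB power_state: 0, VM power_state: 3" while paused, then "VM power_state: 1" after the resume): these are Nova's power-state constants from nova.compute.power_state. A small lookup table covering the commonly used values, as a hedged aid rather than an exhaustive list:

    # nova.compute.power_state values referenced in the sync messages above.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATES[3], '->', POWER_STATES[1])   # the Paused -> Resumed transition logged here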
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.340 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.340 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.340 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.341 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.341 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.341 185478 DEBUG nova.virt.libvirt.driver [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.366 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.432 185478 INFO nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Took 6.44 seconds to spawn the instance on the hypervisor.
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.432 185478 DEBUG nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.506 185478 INFO nova.compute.manager [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Took 7.04 seconds to build instance.
Jan 05 14:46:55 compute-0 nova_compute[185474]: 2026-01-05 14:46:55.521 185478 DEBUG oslo_concurrency.lockutils [None req-c0dafc56-5395-4fae-b779-fcb6e8349d83 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:57 compute-0 nova_compute[185474]: 2026-01-05 14:46:57.316 185478 DEBUG nova.compute.manager [req-ad11c7a3-e262-414f-99aa-99d524ed82b9 req-1079699c-5683-4cd1-bd49-e723d904716e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Received event network-vif-plugged-9e6c6e1b-0aed-450f-a239-509674dfe31f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:46:57 compute-0 nova_compute[185474]: 2026-01-05 14:46:57.317 185478 DEBUG oslo_concurrency.lockutils [req-ad11c7a3-e262-414f-99aa-99d524ed82b9 req-1079699c-5683-4cd1-bd49-e723d904716e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:46:57 compute-0 nova_compute[185474]: 2026-01-05 14:46:57.317 185478 DEBUG oslo_concurrency.lockutils [req-ad11c7a3-e262-414f-99aa-99d524ed82b9 req-1079699c-5683-4cd1-bd49-e723d904716e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:46:57 compute-0 nova_compute[185474]: 2026-01-05 14:46:57.317 185478 DEBUG oslo_concurrency.lockutils [req-ad11c7a3-e262-414f-99aa-99d524ed82b9 req-1079699c-5683-4cd1-bd49-e723d904716e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:46:57 compute-0 nova_compute[185474]: 2026-01-05 14:46:57.317 185478 DEBUG nova.compute.manager [req-ad11c7a3-e262-414f-99aa-99d524ed82b9 req-1079699c-5683-4cd1-bd49-e723d904716e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] No waiting events found dispatching network-vif-plugged-9e6c6e1b-0aed-450f-a239-509674dfe31f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 14:46:57 compute-0 nova_compute[185474]: 2026-01-05 14:46:57.317 185478 WARNING nova.compute.manager [req-ad11c7a3-e262-414f-99aa-99d524ed82b9 req-1079699c-5683-4cd1-bd49-e723d904716e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Received unexpected event network-vif-plugged-9e6c6e1b-0aed-450f-a239-509674dfe31f for instance with vm_state active and task_state None.
Jan 05 14:46:58 compute-0 nova_compute[185474]: 2026-01-05 14:46:58.408 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:58 compute-0 nova_compute[185474]: 2026-01-05 14:46:58.484 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:46:59 compute-0 podman[240466]: 2026-01-05 14:46:59.675628499 +0000 UTC m=+0.148187939 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 14:46:59 compute-0 podman[201880]: time="2026-01-05T14:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:46:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:46:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4369 "" "Go-http-client/1.1"
Jan 05 14:47:01 compute-0 openstack_network_exporter[205179]: ERROR   14:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:47:01 compute-0 openstack_network_exporter[205179]: ERROR   14:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:47:01 compute-0 podman[240486]: 2026-01-05 14:47:01.763637553 +0000 UTC m=+0.233542627 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:47:03 compute-0 nova_compute[185474]: 2026-01-05 14:47:03.413 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:03 compute-0 nova_compute[185474]: 2026-01-05 14:47:03.487 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:04 compute-0 podman[240514]: 2026-01-05 14:47:04.678818288 +0000 UTC m=+0.156455893 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 14:47:04 compute-0 podman[240513]: 2026-01-05 14:47:04.708533456 +0000 UTC m=+0.184739671 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:47:08 compute-0 nova_compute[185474]: 2026-01-05 14:47:08.418 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:08 compute-0 nova_compute[185474]: 2026-01-05 14:47:08.491 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:10 compute-0 podman[240563]: 2026-01-05 14:47:10.675616032 +0000 UTC m=+0.144187260 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:47:10 compute-0 podman[240562]: 2026-01-05 14:47:10.704581639 +0000 UTC m=+0.177667609 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:47:13 compute-0 nova_compute[185474]: 2026-01-05 14:47:13.425 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:13 compute-0 nova_compute[185474]: 2026-01-05 14:47:13.494 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:14 compute-0 podman[240604]: 2026-01-05 14:47:14.604603838 +0000 UTC m=+0.088647500 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, vcs-type=git, vendor=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 05 14:47:18 compute-0 nova_compute[185474]: 2026-01-05 14:47:18.430 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:18 compute-0 nova_compute[185474]: 2026-01-05 14:47:18.498 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:22 compute-0 podman[240624]: 2026-01-05 14:47:22.649526612 +0000 UTC m=+0.125572714 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4, tcib_managed=true)
Jan 05 14:47:23 compute-0 nova_compute[185474]: 2026-01-05 14:47:23.434 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:23 compute-0 nova_compute[185474]: 2026-01-05 14:47:23.502 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:24 compute-0 ovn_controller[97763]: 2026-01-05T14:47:24Z|00039|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 05 14:47:28 compute-0 nova_compute[185474]: 2026-01-05 14:47:28.440 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:28 compute-0 nova_compute[185474]: 2026-01-05 14:47:28.504 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:28 compute-0 ovn_controller[97763]: 2026-01-05T14:47:28Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:9f:84 192.168.0.224
Jan 05 14:47:28 compute-0 ovn_controller[97763]: 2026-01-05T14:47:28Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:9f:84 192.168.0.224
Jan 05 14:47:29 compute-0 podman[201880]: time="2026-01-05T14:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:47:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:47:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4357 "" "Go-http-client/1.1"
Jan 05 14:47:30 compute-0 podman[240652]: 2026-01-05 14:47:30.665937062 +0000 UTC m=+0.140375677 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64)
Jan 05 14:47:31 compute-0 openstack_network_exporter[205179]: ERROR   14:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:47:31 compute-0 openstack_network_exporter[205179]: ERROR   14:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:47:32 compute-0 podman[240676]: 2026-01-05 14:47:32.720835857 +0000 UTC m=+0.205372322 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 05 14:47:32 compute-0 sshd-session[240674]: Invalid user solv from 165.22.168.95 port 45236
Jan 05 14:47:32 compute-0 sshd-session[240674]: Connection closed by invalid user solv 165.22.168.95 port 45236 [preauth]
Jan 05 14:47:33 compute-0 nova_compute[185474]: 2026-01-05 14:47:33.443 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:33 compute-0 nova_compute[185474]: 2026-01-05 14:47:33.506 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:35 compute-0 nova_compute[185474]: 2026-01-05 14:47:35.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:47:35 compute-0 nova_compute[185474]: 2026-01-05 14:47:35.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:47:35 compute-0 nova_compute[185474]: 2026-01-05 14:47:35.537 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:47:35 compute-0 nova_compute[185474]: 2026-01-05 14:47:35.538 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:47:35 compute-0 podman[240700]: 2026-01-05 14:47:35.635827136 +0000 UTC m=+0.114238116 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 05 14:47:35 compute-0 podman[240699]: 2026-01-05 14:47:35.642387534 +0000 UTC m=+0.112601671 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.438 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.439 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.439 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.439 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.584 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.696 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.698 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.780 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.782 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.852 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.853 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.914 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.925 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.996 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:47:36 compute-0 nova_compute[185474]: 2026-01-05 14:47:36.999 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.096 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.098 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.191 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.193 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.295 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.826 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.827 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5081MB free_disk=72.40216827392578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.828 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.828 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.938 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.939 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bdb0ea32-677c-48d8-ae08-c15ba402d14f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.939 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:47:37 compute-0 nova_compute[185474]: 2026-01-05 14:47:37.939 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:47:38 compute-0 nova_compute[185474]: 2026-01-05 14:47:38.002 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:47:38 compute-0 nova_compute[185474]: 2026-01-05 14:47:38.022 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:47:38 compute-0 nova_compute[185474]: 2026-01-05 14:47:38.054 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:47:38 compute-0 nova_compute[185474]: 2026-01-05 14:47:38.055 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:47:38 compute-0 nova_compute[185474]: 2026-01-05 14:47:38.448 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:38 compute-0 nova_compute[185474]: 2026-01-05 14:47:38.509 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:39 compute-0 nova_compute[185474]: 2026-01-05 14:47:39.055 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:47:39 compute-0 nova_compute[185474]: 2026-01-05 14:47:39.055 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:47:39 compute-0 nova_compute[185474]: 2026-01-05 14:47:39.055 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:47:39 compute-0 nova_compute[185474]: 2026-01-05 14:47:39.493 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:47:39 compute-0 nova_compute[185474]: 2026-01-05 14:47:39.494 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:47:39 compute-0 nova_compute[185474]: 2026-01-05 14:47:39.495 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:47:39 compute-0 nova_compute[185474]: 2026-01-05 14:47:39.495 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:47:41 compute-0 nova_compute[185474]: 2026-01-05 14:47:41.292 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:47:41 compute-0 nova_compute[185474]: 2026-01-05 14:47:41.323 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:47:41 compute-0 nova_compute[185474]: 2026-01-05 14:47:41.324 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:47:41 compute-0 nova_compute[185474]: 2026-01-05 14:47:41.326 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:47:41 compute-0 nova_compute[185474]: 2026-01-05 14:47:41.326 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:47:41 compute-0 nova_compute[185474]: 2026-01-05 14:47:41.327 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:47:41 compute-0 nova_compute[185474]: 2026-01-05 14:47:41.329 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:47:41 compute-0 podman[240767]: 2026-01-05 14:47:41.649949599 +0000 UTC m=+0.128941575 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:47:41 compute-0 podman[240766]: 2026-01-05 14:47:41.697901383 +0000 UTC m=+0.170465384 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 05 14:47:43 compute-0 nova_compute[185474]: 2026-01-05 14:47:43.453 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:43 compute-0 nova_compute[185474]: 2026-01-05 14:47:43.513 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:44 compute-0 podman[240808]: 2026-01-05 14:47:44.791384733 +0000 UTC m=+0.109420775 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_id=kepler, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=)
Jan 05 14:47:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:47:44.802 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:47:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:47:44.804 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:47:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:47:44.804 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:47:48 compute-0 nova_compute[185474]: 2026-01-05 14:47:48.457 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:48 compute-0 nova_compute[185474]: 2026-01-05 14:47:48.516 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:53 compute-0 nova_compute[185474]: 2026-01-05 14:47:53.461 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:53 compute-0 nova_compute[185474]: 2026-01-05 14:47:53.520 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:53 compute-0 podman[240829]: 2026-01-05 14:47:53.632942807 +0000 UTC m=+0.113279200 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 05 14:47:58 compute-0 nova_compute[185474]: 2026-01-05 14:47:58.467 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:58 compute-0 nova_compute[185474]: 2026-01-05 14:47:58.523 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:47:59 compute-0 podman[201880]: time="2026-01-05T14:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:47:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:47:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4376 "" "Go-http-client/1.1"
Jan 05 14:48:01 compute-0 openstack_network_exporter[205179]: ERROR   14:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:48:01 compute-0 openstack_network_exporter[205179]: ERROR   14:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:48:01 compute-0 podman[240849]: 2026-01-05 14:48:01.643016123 +0000 UTC m=+0.121630487 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_id=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 14:48:03 compute-0 nova_compute[185474]: 2026-01-05 14:48:03.473 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:03 compute-0 nova_compute[185474]: 2026-01-05 14:48:03.527 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:03 compute-0 podman[240869]: 2026-01-05 14:48:03.697494667 +0000 UTC m=+0.179518949 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 14:48:06 compute-0 podman[240895]: 2026-01-05 14:48:06.634625018 +0000 UTC m=+0.109900487 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:48:06 compute-0 podman[240896]: 2026-01-05 14:48:06.641715161 +0000 UTC m=+0.112196160 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 05 14:48:08 compute-0 nova_compute[185474]: 2026-01-05 14:48:08.478 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:08 compute-0 nova_compute[185474]: 2026-01-05 14:48:08.531 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:12 compute-0 podman[240934]: 2026-01-05 14:48:12.652688588 +0000 UTC m=+0.130150167 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 14:48:12 compute-0 podman[240935]: 2026-01-05 14:48:12.676668281 +0000 UTC m=+0.145599618 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 14:48:13 compute-0 nova_compute[185474]: 2026-01-05 14:48:13.485 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:13 compute-0 nova_compute[185474]: 2026-01-05 14:48:13.535 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:15 compute-0 podman[240976]: 2026-01-05 14:48:15.648288209 +0000 UTC m=+0.115507331 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible, release-0.7.12=)
Jan 05 14:48:18 compute-0 nova_compute[185474]: 2026-01-05 14:48:18.488 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:18 compute-0 nova_compute[185474]: 2026-01-05 14:48:18.539 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:23 compute-0 nova_compute[185474]: 2026-01-05 14:48:23.495 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:23 compute-0 nova_compute[185474]: 2026-01-05 14:48:23.541 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:24 compute-0 podman[240994]: 2026-01-05 14:48:24.660157481 +0000 UTC m=+0.133725825 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 05 14:48:28 compute-0 nova_compute[185474]: 2026-01-05 14:48:28.501 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:28 compute-0 nova_compute[185474]: 2026-01-05 14:48:28.544 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:29 compute-0 podman[201880]: time="2026-01-05T14:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:48:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:48:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4364 "" "Go-http-client/1.1"
Jan 05 14:48:31 compute-0 openstack_network_exporter[205179]: ERROR   14:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:48:31 compute-0 openstack_network_exporter[205179]: ERROR   14:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:48:32 compute-0 podman[241012]: 2026-01-05 14:48:32.648916587 +0000 UTC m=+0.130527609 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Jan 05 14:48:33 compute-0 nova_compute[185474]: 2026-01-05 14:48:33.507 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:33 compute-0 nova_compute[185474]: 2026-01-05 14:48:33.546 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:34 compute-0 podman[241033]: 2026-01-05 14:48:34.743635594 +0000 UTC m=+0.220160146 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 05 14:48:36 compute-0 nova_compute[185474]: 2026-01-05 14:48:36.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:48:36 compute-0 nova_compute[185474]: 2026-01-05 14:48:36.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:48:36 compute-0 nova_compute[185474]: 2026-01-05 14:48:36.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.440 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.442 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.443 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.444 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.577 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:48:37 compute-0 podman[241060]: 2026-01-05 14:48:37.610450261 +0000 UTC m=+0.080865918 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:48:37 compute-0 podman[241059]: 2026-01-05 14:48:37.629007856 +0000 UTC m=+0.116927190 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.655 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.656 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.739 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.741 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.750 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.750 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.750 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.751 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb524b9b0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.757 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'name': 'test_0', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.759 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance bdb0ea32-677c-48d8-ae08-c15ba402d14f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 14:48:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:37.760 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/bdb0ea32-677c-48d8-ae08-c15ba402d14f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}3a9a6b0d955f091f392374a695f163a2995629ca5c315b3823e8a6b9c12e4c9b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.816 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.818 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.916 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:48:37 compute-0 nova_compute[185474]: 2026-01-05 14:48:37.934 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.017 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.018 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.078 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.080 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.152 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.154 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.221 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.511 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.549 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.753 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.756 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5049MB free_disk=72.40216827392578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.756 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.758 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.867 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.868 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bdb0ea32-677c-48d8-ae08-c15ba402d14f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.869 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.869 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:48:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:38.895 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Mon, 05 Jan 2026 14:48:37 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3ef9579e-b329-490d-ae5b-cb31e7c22fc2 x-openstack-request-id: req-3ef9579e-b329-490d-ae5b-cb31e7c22fc2 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 14:48:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:38.896 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "bdb0ea32-677c-48d8-ae08-c15ba402d14f", "name": "vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq", "status": "ACTIVE", "tenant_id": "54417029b2fb4b749e20754214013802", "user_id": "4c0cf318026a40748762c9e05cd1efe0", "metadata": {"metering.server_group": "fb98dcdd-a12e-44ca-97ca-fe43134a3faa"}, "hostId": "35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc", "image": {"id": "22e54d95-dd91-4f66-a65f-ce9984e648dc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/22e54d95-dd91-4f66-a65f-ce9984e648dc"}]}, "flavor": {"id": "afe04c80-f0ab-417e-844c-b5b05cc96b17", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/afe04c80-f0ab-417e-844c-b5b05cc96b17"}]}, "created": "2026-01-05T14:46:46Z", "updated": "2026-01-05T14:46:55Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.224", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4a:9f:84"}, {"version": 4, "addr": "192.168.122.238", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4a:9f:84"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/bdb0ea32-677c-48d8-ae08-c15ba402d14f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/bdb0ea32-677c-48d8-ae08-c15ba402d14f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-05T14:46:55.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 14:48:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:38.896 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/bdb0ea32-677c-48d8-ae08-c15ba402d14f used request id req-3ef9579e-b329-490d-ae5b-cb31e7c22fc2 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 14:48:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:38.898 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bdb0ea32-677c-48d8-ae08-c15ba402d14f', 'name': 'vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:48:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:38.898 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:48:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:38.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:38.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:38.899 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:38.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T14:48:38.899378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.961 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.990 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.994 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:48:38 compute-0 nova_compute[185474]: 2026-01-05 14:48:38.996 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.008 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 1728689582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.009 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 18915144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.009 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.100 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 1218102464 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.101 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 12433569 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.102 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.103 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.104 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.104 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 396012509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.105 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 113701999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T14:48:39.104520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.106 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 62657112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.107 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 601656532 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.107 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 105953551 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.108 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 68177111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.109 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.110 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.110 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.111 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.111 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.112 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.112 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.113 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.115 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.116 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T14:48:39.110425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T14:48:39.116006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.149 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.150 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.150 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.194 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.194 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.195 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.196 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.196 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T14:48:39.196425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.200 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.203 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for bdb0ea32-677c-48d8-ae08-c15ba402d14f / tap9e6c6e1b-0a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.203 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.204 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.204 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.204 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T14:48:39.204944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.205 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 41832448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.205 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.206 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.206 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 41811968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.206 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.206 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.207 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.207 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.207 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.208 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.209 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.209 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.209 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.209 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.210 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.210 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.210 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.210 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T14:48:39.207943) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T14:48:39.209137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.212 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.212 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.212 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.213 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.213 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.213 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.213 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.214 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.214 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T14:48:39.212573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.214 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.215 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.215 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.215 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.215 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T14:48:39.215662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.246 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/cpu volume: 36050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.289 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/cpu volume: 56160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.291 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.291 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.292 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.292 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.293 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T14:48:39.291968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.295 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.295 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.296 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T14:48:39.296669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.297 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.298 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.300 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.300 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.301 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.301 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T14:48:39.300908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.302 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq>]
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.304 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.304 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.304 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T14:48:39.304878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.307 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.308 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.308 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.309 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq>]
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T14:48:39.308424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.311 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.311 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.312 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.313 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T14:48:39.311880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.314 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.316 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T14:48:39.316453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.318 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.319 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.319 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.320 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.320 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.321 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T14:48:39.320463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.324 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.325 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.325 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.327 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.328 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.329 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T14:48:39.324776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.330 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T14:48:39.328755) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.331 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.bytes volume: 4690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.332 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.333 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.333 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.334 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.335 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.335 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.336 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.336 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T14:48:39.333251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T14:48:39.335860) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.338 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/memory.usage volume: 48.94140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.338 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/memory.usage volume: 49.13671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.339 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T14:48:39.338008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.340 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.340 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.340 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.340 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.341 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.341 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.341 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.342 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.342 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.342 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.343 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.343 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.344 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.344 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.344 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.344 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.345 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T14:48:39.339979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T14:48:39.342835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T14:48:39.344307) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.346 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.346 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:48:39.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:48:39 compute-0 nova_compute[185474]: 2026-01-05 14:48:39.997 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:48:39 compute-0 nova_compute[185474]: 2026-01-05 14:48:39.998 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:48:40 compute-0 nova_compute[185474]: 2026-01-05 14:48:40.887 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:48:40 compute-0 nova_compute[185474]: 2026-01-05 14:48:40.887 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:48:40 compute-0 nova_compute[185474]: 2026-01-05 14:48:40.888 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:48:43 compute-0 nova_compute[185474]: 2026-01-05 14:48:43.386 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updating instance_info_cache with network_info: [{"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:48:43 compute-0 nova_compute[185474]: 2026-01-05 14:48:43.405 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:48:43 compute-0 nova_compute[185474]: 2026-01-05 14:48:43.406 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:48:43 compute-0 nova_compute[185474]: 2026-01-05 14:48:43.406 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:48:43 compute-0 nova_compute[185474]: 2026-01-05 14:48:43.407 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:48:43 compute-0 nova_compute[185474]: 2026-01-05 14:48:43.407 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:48:43 compute-0 nova_compute[185474]: 2026-01-05 14:48:43.408 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:48:43 compute-0 nova_compute[185474]: 2026-01-05 14:48:43.408 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:48:43 compute-0 nova_compute[185474]: 2026-01-05 14:48:43.520 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:43 compute-0 nova_compute[185474]: 2026-01-05 14:48:43.553 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:43 compute-0 podman[241125]: 2026-01-05 14:48:43.580948176 +0000 UTC m=+0.076328807 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:48:43 compute-0 podman[241126]: 2026-01-05 14:48:43.610372526 +0000 UTC m=+0.097439211 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:48:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:48:44.803 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:48:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:48:44.804 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:48:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:48:44.804 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:48:46 compute-0 podman[241166]: 2026-01-05 14:48:46.599872928 +0000 UTC m=+0.091151128 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.openshift.tags=base rhel9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container)
Jan 05 14:48:48 compute-0 nova_compute[185474]: 2026-01-05 14:48:48.525 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:48 compute-0 nova_compute[185474]: 2026-01-05 14:48:48.554 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:53 compute-0 nova_compute[185474]: 2026-01-05 14:48:53.531 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:53 compute-0 nova_compute[185474]: 2026-01-05 14:48:53.557 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:55 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 14:48:55 compute-0 podman[241188]: 2026-01-05 14:48:55.178537969 +0000 UTC m=+0.125199485 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 14:48:58 compute-0 nova_compute[185474]: 2026-01-05 14:48:58.535 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:58 compute-0 nova_compute[185474]: 2026-01-05 14:48:58.559 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:48:59 compute-0 podman[201880]: time="2026-01-05T14:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:48:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:48:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4370 "" "Go-http-client/1.1"
Jan 05 14:49:01 compute-0 openstack_network_exporter[205179]: ERROR   14:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:49:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:49:01 compute-0 openstack_network_exporter[205179]: ERROR   14:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:49:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:49:03 compute-0 nova_compute[185474]: 2026-01-05 14:49:03.539 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:03 compute-0 nova_compute[185474]: 2026-01-05 14:49:03.564 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:03 compute-0 podman[241213]: 2026-01-05 14:49:03.657401255 +0000 UTC m=+0.134332693 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Jan 05 14:49:04 compute-0 sshd-session[241235]: Connection closed by 34.229.16.67 port 54168 [preauth]
Jan 05 14:49:05 compute-0 podman[241237]: 2026-01-05 14:49:05.728539372 +0000 UTC m=+0.205483428 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller)
Jan 05 14:49:08 compute-0 nova_compute[185474]: 2026-01-05 14:49:08.546 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:08 compute-0 nova_compute[185474]: 2026-01-05 14:49:08.563 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:08 compute-0 podman[241262]: 2026-01-05 14:49:08.613996046 +0000 UTC m=+0.092514556 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:49:08 compute-0 podman[241263]: 2026-01-05 14:49:08.626355211 +0000 UTC m=+0.111223365 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 14:49:13 compute-0 nova_compute[185474]: 2026-01-05 14:49:13.557 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:13 compute-0 nova_compute[185474]: 2026-01-05 14:49:13.564 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:14 compute-0 podman[241301]: 2026-01-05 14:49:14.617468746 +0000 UTC m=+0.099765773 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 14:49:14 compute-0 podman[241302]: 2026-01-05 14:49:14.648628293 +0000 UTC m=+0.115865701 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:49:17 compute-0 podman[241343]: 2026-01-05 14:49:17.668950134 +0000 UTC m=+0.153461663 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-type=git, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, distribution-scope=public)
Jan 05 14:49:18 compute-0 nova_compute[185474]: 2026-01-05 14:49:18.562 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:18 compute-0 nova_compute[185474]: 2026-01-05 14:49:18.568 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:23 compute-0 nova_compute[185474]: 2026-01-05 14:49:23.569 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:23 compute-0 nova_compute[185474]: 2026-01-05 14:49:23.573 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 14:49:25 compute-0 podman[241364]: 2026-01-05 14:49:25.664856051 +0000 UTC m=+0.135046173 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true)
Jan 05 14:49:28 compute-0 nova_compute[185474]: 2026-01-05 14:49:28.572 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 14:49:29 compute-0 podman[201880]: time="2026-01-05T14:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:49:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:49:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4371 "" "Go-http-client/1.1"
Jan 05 14:49:31 compute-0 openstack_network_exporter[205179]: ERROR   14:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:49:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:49:31 compute-0 openstack_network_exporter[205179]: ERROR   14:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:49:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:49:33 compute-0 nova_compute[185474]: 2026-01-05 14:49:33.576 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:34 compute-0 podman[241383]: 2026-01-05 14:49:34.637344517 +0000 UTC m=+0.124833205 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350)
Jan 05 14:49:36 compute-0 nova_compute[185474]: 2026-01-05 14:49:36.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:49:36 compute-0 nova_compute[185474]: 2026-01-05 14:49:36.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:49:36 compute-0 nova_compute[185474]: 2026-01-05 14:49:36.431 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:49:36 compute-0 nova_compute[185474]: 2026-01-05 14:49:36.432 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:49:36 compute-0 podman[241402]: 2026-01-05 14:49:36.671925059 +0000 UTC m=+0.163458434 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.438 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.440 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.441 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.441 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.561 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.596 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.667 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.669 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.738 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.741 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.811 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.813 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.905 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:49:38 compute-0 nova_compute[185474]: 2026-01-05 14:49:38.916 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.010 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.011 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.104 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.107 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.176 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.177 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.244 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:49:39 compute-0 podman[241452]: 2026-01-05 14:49:39.592289973 +0000 UTC m=+0.084663822 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:49:39 compute-0 podman[241453]: 2026-01-05 14:49:39.614484667 +0000 UTC m=+0.098308694 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.631 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.633 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5061MB free_disk=72.40216827392578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.633 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.633 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.992 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.992 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bdb0ea32-677c-48d8-ae08-c15ba402d14f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.992 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:49:39 compute-0 nova_compute[185474]: 2026-01-05 14:49:39.992 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:49:40 compute-0 nova_compute[185474]: 2026-01-05 14:49:40.066 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:49:40 compute-0 nova_compute[185474]: 2026-01-05 14:49:40.079 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:49:40 compute-0 nova_compute[185474]: 2026-01-05 14:49:40.082 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:49:40 compute-0 nova_compute[185474]: 2026-01-05 14:49:40.082 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:49:41 compute-0 nova_compute[185474]: 2026-01-05 14:49:41.081 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:49:41 compute-0 nova_compute[185474]: 2026-01-05 14:49:41.082 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:49:41 compute-0 nova_compute[185474]: 2026-01-05 14:49:41.082 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:49:42 compute-0 nova_compute[185474]: 2026-01-05 14:49:42.273 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:49:42 compute-0 nova_compute[185474]: 2026-01-05 14:49:42.274 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:49:42 compute-0 nova_compute[185474]: 2026-01-05 14:49:42.274 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:49:42 compute-0 nova_compute[185474]: 2026-01-05 14:49:42.275 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:49:43 compute-0 nova_compute[185474]: 2026-01-05 14:49:43.583 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:43 compute-0 nova_compute[185474]: 2026-01-05 14:49:43.607 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:44 compute-0 podman[241491]: 2026-01-05 14:49:44.785908637 +0000 UTC m=+0.093606806 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 14:49:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:49:44.805 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:49:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:49:44.805 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:49:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:49:44.805 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:49:44 compute-0 podman[241492]: 2026-01-05 14:49:44.83016361 +0000 UTC m=+0.126279684 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 14:49:45 compute-0 nova_compute[185474]: 2026-01-05 14:49:45.516 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:49:45 compute-0 nova_compute[185474]: 2026-01-05 14:49:45.535 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:49:45 compute-0 nova_compute[185474]: 2026-01-05 14:49:45.536 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:49:45 compute-0 nova_compute[185474]: 2026-01-05 14:49:45.536 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:49:45 compute-0 nova_compute[185474]: 2026-01-05 14:49:45.537 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:49:45 compute-0 nova_compute[185474]: 2026-01-05 14:49:45.537 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:49:45 compute-0 nova_compute[185474]: 2026-01-05 14:49:45.537 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:49:48 compute-0 nova_compute[185474]: 2026-01-05 14:49:48.587 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:48 compute-0 nova_compute[185474]: 2026-01-05 14:49:48.610 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:48 compute-0 podman[241533]: 2026-01-05 14:49:48.655096945 +0000 UTC m=+0.134815996 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, config_id=kepler, distribution-scope=public, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.buildah.version=1.29.0)
Jan 05 14:49:53 compute-0 nova_compute[185474]: 2026-01-05 14:49:53.594 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:53 compute-0 nova_compute[185474]: 2026-01-05 14:49:53.613 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:56 compute-0 podman[241554]: 2026-01-05 14:49:56.659835741 +0000 UTC m=+0.131070563 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 05 14:49:58 compute-0 nova_compute[185474]: 2026-01-05 14:49:58.597 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:58 compute-0 nova_compute[185474]: 2026-01-05 14:49:58.615 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:49:59 compute-0 podman[201880]: time="2026-01-05T14:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:49:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:49:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 05 14:50:01 compute-0 openstack_network_exporter[205179]: ERROR   14:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:50:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:50:01 compute-0 openstack_network_exporter[205179]: ERROR   14:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:50:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:50:03 compute-0 nova_compute[185474]: 2026-01-05 14:50:03.599 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:03 compute-0 nova_compute[185474]: 2026-01-05 14:50:03.619 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:05 compute-0 podman[241574]: 2026-01-05 14:50:05.653243017 +0000 UTC m=+0.130377746 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 14:50:07 compute-0 podman[241593]: 2026-01-05 14:50:07.673822638 +0000 UTC m=+0.158013716 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 14:50:08 compute-0 nova_compute[185474]: 2026-01-05 14:50:08.601 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:08 compute-0 nova_compute[185474]: 2026-01-05 14:50:08.622 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:10 compute-0 podman[241616]: 2026-01-05 14:50:10.622389589 +0000 UTC m=+0.103614188 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:50:10 compute-0 podman[241617]: 2026-01-05 14:50:10.667918956 +0000 UTC m=+0.132840733 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 05 14:50:13 compute-0 nova_compute[185474]: 2026-01-05 14:50:13.604 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:13 compute-0 nova_compute[185474]: 2026-01-05 14:50:13.626 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:15 compute-0 podman[241654]: 2026-01-05 14:50:15.625263866 +0000 UTC m=+0.113434284 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:50:15 compute-0 podman[241655]: 2026-01-05 14:50:15.670502417 +0000 UTC m=+0.144025397 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:50:18 compute-0 nova_compute[185474]: 2026-01-05 14:50:18.611 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:18 compute-0 nova_compute[185474]: 2026-01-05 14:50:18.629 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:19 compute-0 podman[241699]: 2026-01-05 14:50:19.610178181 +0000 UTC m=+0.088695063 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, architecture=x86_64, managed_by=edpm_ansible, name=ubi9, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 05 14:50:23 compute-0 nova_compute[185474]: 2026-01-05 14:50:23.614 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:23 compute-0 nova_compute[185474]: 2026-01-05 14:50:23.633 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:25 compute-0 sshd-session[241717]: Invalid user solv from 165.22.168.95 port 54470
Jan 05 14:50:25 compute-0 sshd-session[241717]: Connection closed by invalid user solv 165.22.168.95 port 54470 [preauth]
Jan 05 14:50:27 compute-0 podman[241720]: 2026-01-05 14:50:27.642377214 +0000 UTC m=+0.109955730 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 14:50:28 compute-0 nova_compute[185474]: 2026-01-05 14:50:28.623 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:28 compute-0 nova_compute[185474]: 2026-01-05 14:50:28.637 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:29 compute-0 podman[201880]: time="2026-01-05T14:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:50:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:50:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Jan 05 14:50:31 compute-0 openstack_network_exporter[205179]: ERROR   14:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:50:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:50:31 compute-0 openstack_network_exporter[205179]: ERROR   14:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:50:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:50:33 compute-0 nova_compute[185474]: 2026-01-05 14:50:33.619 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:33 compute-0 nova_compute[185474]: 2026-01-05 14:50:33.640 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:36 compute-0 podman[241741]: 2026-01-05 14:50:36.651892987 +0000 UTC m=+0.116597700 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=openstack_network_exporter)
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.751 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.752 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb52496a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.764 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'name': 'test_0', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.768 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bdb0ea32-677c-48d8-ae08-c15ba402d14f', 'name': 'vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.768 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.768 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.768 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.768 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.769 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T14:50:37.768637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.868 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 1728689582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.869 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 18915144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.869 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.972 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 1225483066 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.972 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 12433569 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.972 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.973 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.973 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.973 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.973 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.974 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.974 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 396012509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.974 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 113701999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.974 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 62657112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T14:50:37.973975) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.975 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 601656532 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.975 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 105953551 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.975 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 68177111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.976 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.976 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.976 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.976 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.976 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.976 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.977 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.977 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.977 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T14:50:37.976605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.977 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.977 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.977 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.978 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.978 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.979 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:37.979 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T14:50:37.979141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.010 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.010 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.011 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.054 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.054 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.055 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.056 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T14:50:38.056605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.062 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.067 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.068 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T14:50:38.068442) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.069 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 41832448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.069 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.069 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.070 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.070 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.070 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.071 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.072 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T14:50:38.072144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T14:50:38.074082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.074 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.075 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.075 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.075 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.076 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.076 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.077 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T14:50:38.077762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.078 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.078 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.079 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.079 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.079 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.080 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.080 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.081 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T14:50:38.081169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.113 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/cpu volume: 37970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.140 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/cpu volume: 174790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.140 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.140 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.141 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.141 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.141 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.142 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T14:50:38.141681) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.142 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.143 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.143 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.143 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.144 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T14:50:38.144048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.144 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.145 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.145 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.146 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.146 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.146 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T14:50:38.146744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.148 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.148 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.148 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.148 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.148 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.149 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T14:50:38.148900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.149 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.150 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.150 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.151 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T14:50:38.151176) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.151 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.152 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.152 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.153 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.153 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T14:50:38.153438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.154 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.154 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.155 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.155 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.155 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.155 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T14:50:38.155735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.156 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.156 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.157 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.157 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.158 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.158 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T14:50:38.157807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.158 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.159 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.bytes volume: 4760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.159 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.159 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.160 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T14:50:38.160012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.160 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.161 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.162 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.162 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T14:50:38.162504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.163 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.163 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.164 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.164 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.165 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/memory.usage volume: 48.94140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T14:50:38.164653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.165 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/memory.usage volume: 49.12890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.166 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.166 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.166 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.166 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.167 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.167 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T14:50:38.166834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.167 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.167 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.168 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.168 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.168 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.169 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.169 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.170 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.170 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.170 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T14:50:38.170489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.171 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.171 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.171 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.172 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.172 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.172 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.172 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.172 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T14:50:38.172375) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.172 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.173 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.173 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.173 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.173 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.173 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.174 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.175 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.176 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.176 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.176 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.176 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.176 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.176 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.176 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.177 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.177 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.177 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.177 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.177 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.177 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.177 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:50:38.178 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:50:38 compute-0 nova_compute[185474]: 2026-01-05 14:50:38.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:38 compute-0 nova_compute[185474]: 2026-01-05 14:50:38.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:38 compute-0 nova_compute[185474]: 2026-01-05 14:50:38.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:38 compute-0 nova_compute[185474]: 2026-01-05 14:50:38.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:50:38 compute-0 nova_compute[185474]: 2026-01-05 14:50:38.621 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:38 compute-0 nova_compute[185474]: 2026-01-05 14:50:38.643 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:38 compute-0 podman[241762]: 2026-01-05 14:50:38.649776132 +0000 UTC m=+0.140954113 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.401 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.449 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.451 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.452 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.453 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.574 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.640 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.643 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.707 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.710 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.805 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.807 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.907 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.915 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.993 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:50:39 compute-0 nova_compute[185474]: 2026-01-05 14:50:39.994 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.091 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.093 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.187 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.188 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.260 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.730 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.731 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5060MB free_disk=72.40118789672852GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.732 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.732 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.904 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.904 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bdb0ea32-677c-48d8-ae08-c15ba402d14f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.905 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.905 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:50:40 compute-0 nova_compute[185474]: 2026-01-05 14:50:40.981 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing inventories for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 14:50:41 compute-0 nova_compute[185474]: 2026-01-05 14:50:41.059 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating ProviderTree inventory for provider 81b80649-e249-4f86-9377-abfcf7fc43dd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 14:50:41 compute-0 nova_compute[185474]: 2026-01-05 14:50:41.060 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 14:50:41 compute-0 nova_compute[185474]: 2026-01-05 14:50:41.079 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing aggregate associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 14:50:41 compute-0 nova_compute[185474]: 2026-01-05 14:50:41.116 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing trait associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, traits: HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 14:50:41 compute-0 nova_compute[185474]: 2026-01-05 14:50:41.190 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:50:41 compute-0 nova_compute[185474]: 2026-01-05 14:50:41.210 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:50:41 compute-0 nova_compute[185474]: 2026-01-05 14:50:41.212 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:50:41 compute-0 nova_compute[185474]: 2026-01-05 14:50:41.213 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:50:41 compute-0 podman[241813]: 2026-01-05 14:50:41.668145729 +0000 UTC m=+0.135205355 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 14:50:41 compute-0 podman[241814]: 2026-01-05 14:50:41.669055614 +0000 UTC m=+0.134794974 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 05 14:50:42 compute-0 nova_compute[185474]: 2026-01-05 14:50:42.212 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:42 compute-0 nova_compute[185474]: 2026-01-05 14:50:42.213 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:50:43 compute-0 nova_compute[185474]: 2026-01-05 14:50:43.291 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:50:43 compute-0 nova_compute[185474]: 2026-01-05 14:50:43.292 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:50:43 compute-0 nova_compute[185474]: 2026-01-05 14:50:43.293 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:50:43 compute-0 nova_compute[185474]: 2026-01-05 14:50:43.623 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:43 compute-0 nova_compute[185474]: 2026-01-05 14:50:43.647 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:50:44.806 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:50:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:50:44.806 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:50:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:50:44.807 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.642 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updating instance_info_cache with network_info: [{"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:50:46 compute-0 podman[241854]: 2026-01-05 14:50:46.647577695 +0000 UTC m=+0.113838724 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 14:50:46 compute-0 podman[241853]: 2026-01-05 14:50:46.658714953 +0000 UTC m=+0.130142315 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.663 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.663 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.664 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.665 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.665 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.666 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.666 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.679 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.679 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.680 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 14:50:46 compute-0 nova_compute[185474]: 2026-01-05 14:50:46.691 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:50:48 compute-0 nova_compute[185474]: 2026-01-05 14:50:48.625 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:48 compute-0 nova_compute[185474]: 2026-01-05 14:50:48.649 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:50 compute-0 podman[241894]: 2026-01-05 14:50:50.627761544 +0000 UTC m=+0.114889874 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, io.openshift.tags=base rhel9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 14:50:53 compute-0 nova_compute[185474]: 2026-01-05 14:50:53.627 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:53 compute-0 nova_compute[185474]: 2026-01-05 14:50:53.651 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:50:57.864 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:50:57 compute-0 nova_compute[185474]: 2026-01-05 14:50:57.868 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:50:57.870 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 14:50:58 compute-0 nova_compute[185474]: 2026-01-05 14:50:58.632 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:58 compute-0 podman[241914]: 2026-01-05 14:50:58.639524031 +0000 UTC m=+0.121935988 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 14:50:58 compute-0 nova_compute[185474]: 2026-01-05 14:50:58.654 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:50:59 compute-0 podman[201880]: time="2026-01-05T14:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:50:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:50:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 05 14:51:01 compute-0 openstack_network_exporter[205179]: ERROR   14:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:51:01 compute-0 openstack_network_exporter[205179]: ERROR   14:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:51:02 compute-0 nova_compute[185474]: 2026-01-05 14:51:02.832 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:02 compute-0 nova_compute[185474]: 2026-01-05 14:51:02.833 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:02 compute-0 nova_compute[185474]: 2026-01-05 14:51:02.852 185478 DEBUG nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 14:51:02 compute-0 nova_compute[185474]: 2026-01-05 14:51:02.933 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:02 compute-0 nova_compute[185474]: 2026-01-05 14:51:02.935 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:02 compute-0 nova_compute[185474]: 2026-01-05 14:51:02.946 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 14:51:02 compute-0 nova_compute[185474]: 2026-01-05 14:51:02.947 185478 INFO nova.compute.claims [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Claim successful on node compute-0.ctlplane.example.com
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.155 185478 DEBUG nova.compute.provider_tree [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.182 185478 DEBUG nova.scheduler.client.report [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.210 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.212 185478 DEBUG nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.280 185478 DEBUG nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.281 185478 DEBUG nova.network.neutron [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.300 185478 INFO nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.342 185478 DEBUG nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.445 185478 DEBUG nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.448 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.449 185478 INFO nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Creating image(s)
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.451 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "/var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.451 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.453 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.476 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.561 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.563 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.565 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.594 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.634 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.659 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.672 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.674 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4,backing_fmt=raw /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.724 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4,backing_fmt=raw /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.726 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.727 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.802 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.805 185478 DEBUG nova.virt.disk.api [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Checking if we can resize image /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.807 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.883 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.887 185478 DEBUG nova.virt.disk.api [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Cannot resize image /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.888 185478 DEBUG nova.objects.instance [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'migration_context' on Instance uuid f927dce2-97db-41ff-a7bc-a34d4e7486d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.920 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "/var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.927 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.929 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:03 compute-0 nova_compute[185474]: 2026-01-05 14:51:03.951 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.052 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.055 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.057 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.082 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.149 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.151 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.193 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.195 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.196 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.255 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.257 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.257 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Ensure instance console log exists: /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.258 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.259 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:04 compute-0 nova_compute[185474]: 2026-01-05 14:51:04.259 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:05 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:05.874 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:51:07 compute-0 podman[241960]: 2026-01-05 14:51:07.61482541 +0000 UTC m=+0.105415803 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal)
Jan 05 14:51:08 compute-0 nova_compute[185474]: 2026-01-05 14:51:08.346 185478 DEBUG nova.network.neutron [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Successfully updated port: 4d2a5913-5bee-4ecb-8f19-5653e42acc47 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 14:51:08 compute-0 nova_compute[185474]: 2026-01-05 14:51:08.373 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:51:08 compute-0 nova_compute[185474]: 2026-01-05 14:51:08.374 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquired lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:51:08 compute-0 nova_compute[185474]: 2026-01-05 14:51:08.376 185478 DEBUG nova.network.neutron [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 14:51:08 compute-0 nova_compute[185474]: 2026-01-05 14:51:08.475 185478 DEBUG nova.compute.manager [req-3adb2621-b30d-4d5a-b25c-ae5f07d38330 req-526eb371-5d39-4463-98bd-0924bee9ef8c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Received event network-changed-4d2a5913-5bee-4ecb-8f19-5653e42acc47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:51:08 compute-0 nova_compute[185474]: 2026-01-05 14:51:08.476 185478 DEBUG nova.compute.manager [req-3adb2621-b30d-4d5a-b25c-ae5f07d38330 req-526eb371-5d39-4463-98bd-0924bee9ef8c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Refreshing instance network info cache due to event network-changed-4d2a5913-5bee-4ecb-8f19-5653e42acc47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 14:51:08 compute-0 nova_compute[185474]: 2026-01-05 14:51:08.477 185478 DEBUG oslo_concurrency.lockutils [req-3adb2621-b30d-4d5a-b25c-ae5f07d38330 req-526eb371-5d39-4463-98bd-0924bee9ef8c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:51:08 compute-0 nova_compute[185474]: 2026-01-05 14:51:08.545 185478 DEBUG nova.network.neutron [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 14:51:08 compute-0 nova_compute[185474]: 2026-01-05 14:51:08.636 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:08 compute-0 nova_compute[185474]: 2026-01-05 14:51:08.663 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:09 compute-0 podman[241982]: 2026-01-05 14:51:09.664576753 +0000 UTC m=+0.156528005 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.799 185478 DEBUG nova.network.neutron [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Updating instance_info_cache with network_info: [{"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.821 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Releasing lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.822 185478 DEBUG nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Instance network_info: |[{"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.823 185478 DEBUG oslo_concurrency.lockutils [req-3adb2621-b30d-4d5a-b25c-ae5f07d38330 req-526eb371-5d39-4463-98bd-0924bee9ef8c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.824 185478 DEBUG nova.network.neutron [req-3adb2621-b30d-4d5a-b25c-ae5f07d38330 req-526eb371-5d39-4463-98bd-0924bee9ef8c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Refreshing network info cache for port 4d2a5913-5bee-4ecb-8f19-5653e42acc47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.831 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Start _get_guest_xml network_info=[{"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T14:44:12Z,direct_url=<?>,disk_format='qcow2',id=22e54d95-dd91-4f66-a65f-ce9984e648dc,min_disk=0,min_ram=0,name='cirros',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T14:44:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}], 'ephemerals': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.841 185478 WARNING nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.851 185478 DEBUG nova.virt.libvirt.host [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.852 185478 DEBUG nova.virt.libvirt.host [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.856 185478 DEBUG nova.virt.libvirt.host [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.857 185478 DEBUG nova.virt.libvirt.host [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.857 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.858 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T14:44:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='afe04c80-f0ab-417e-844c-b5b05cc96b17',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T14:44:12Z,direct_url=<?>,disk_format='qcow2',id=22e54d95-dd91-4f66-a65f-ce9984e648dc,min_disk=0,min_ram=0,name='cirros',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T14:44:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.859 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.859 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.860 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.860 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.860 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.861 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.861 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.862 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.862 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.863 185478 DEBUG nova.virt.hardware [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.866 185478 DEBUG nova.virt.libvirt.vif [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T14:51:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek',id=3,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='fb98dcdd-a12e-44ca-97ca-fe43134a3faa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-04w16ma5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T14:51:03Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zOTk4MTAxOTg2Mzg3NTM2NDQxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM5OTgxMDE5ODYzODc1MzY0NDE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzk5ODEwMTk4NjM4NzUzNjQ0MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM5OTgxMDE5ODYzODc1MzY0NDE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zOTk4MTAxOTg2Mzg3NTM2NDQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zOTk4MTAxOTg2Mzg3NTM2NDQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 05 14:51:09 compute-0 nova_compute[185474]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mzk5ODEwMTk4NjM4NzUzNjQ0MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM5OTgxMDE5ODYzODc1MzY0NDE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zOTk4MTAxOTg2Mzg3NTM2NDQxPT0tLQo=',user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=f927dce2-97db-41ff-a7bc-a34d4e7486d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.867 185478 DEBUG nova.network.os_vif_util [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.868 185478 DEBUG nova.network.os_vif_util [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:98:05,bridge_name='br-int',has_traffic_filtering=True,id=4d2a5913-5bee-4ecb-8f19-5653e42acc47,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4d2a5913-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.869 185478 DEBUG nova.objects.instance [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'pci_devices' on Instance uuid f927dce2-97db-41ff-a7bc-a34d4e7486d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.885 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] End _get_guest_xml xml=<domain type="kvm">
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <uuid>f927dce2-97db-41ff-a7bc-a34d4e7486d4</uuid>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <name>instance-00000003</name>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <memory>524288</memory>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <metadata>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <nova:name>vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek</nova:name>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 14:51:09</nova:creationTime>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <nova:flavor name="m1.small">
Jan 05 14:51:09 compute-0 nova_compute[185474]:         <nova:memory>512</nova:memory>
Jan 05 14:51:09 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 14:51:09 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 14:51:09 compute-0 nova_compute[185474]:         <nova:ephemeral>1</nova:ephemeral>
Jan 05 14:51:09 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 14:51:09 compute-0 nova_compute[185474]:         <nova:user uuid="4c0cf318026a40748762c9e05cd1efe0">admin</nova:user>
Jan 05 14:51:09 compute-0 nova_compute[185474]:         <nova:project uuid="54417029b2fb4b749e20754214013802">admin</nova:project>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="22e54d95-dd91-4f66-a65f-ce9984e648dc"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <nova:ports>
Jan 05 14:51:09 compute-0 nova_compute[185474]:         <nova:port uuid="4d2a5913-5bee-4ecb-8f19-5653e42acc47">
Jan 05 14:51:09 compute-0 nova_compute[185474]:           <nova:ip type="fixed" address="192.168.0.34" ipVersion="4"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:         </nova:port>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       </nova:ports>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   </metadata>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <system>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <entry name="serial">f927dce2-97db-41ff-a7bc-a34d4e7486d4</entry>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <entry name="uuid">f927dce2-97db-41ff-a7bc-a34d4e7486d4</entry>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     </system>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <os>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   </os>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <features>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <apic/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   </features>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   </clock>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   </cpu>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   <devices>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <target dev="vdb" bus="virtio"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.config"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <interface type="ethernet">
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <mac address="fa:16:3e:84:98:05"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <mtu size="1442"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <target dev="tap4d2a5913-5b"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     </interface>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/console.log" append="off"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     </serial>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <video>
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     </video>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     </rng>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 14:51:09 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 14:51:09 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 14:51:09 compute-0 nova_compute[185474]:   </devices>
Jan 05 14:51:09 compute-0 nova_compute[185474]: </domain>
Jan 05 14:51:09 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.898 185478 DEBUG nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Preparing to wait for external event network-vif-plugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.899 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.899 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.899 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.900 185478 DEBUG nova.virt.libvirt.vif [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T14:51:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek',id=3,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='fb98dcdd-a12e-44ca-97ca-fe43134a3faa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-04w16ma5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T14:51:03Z,user_data='Content-Type: multipart/mixed; boundary="===============3998101986387536441=="
MIME-Version: 1.0

--===============3998101986387536441==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"



# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}

--===============3998101986387536441==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="boothook.sh"

#!/usr/bin/bash

# FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu
# 12.04 LTS:
# https://bugs.launchpad.net/heat/+bug/1257410
#
# The old cloud-init doesn't create the users directly so the commands to do
# this are injected though nova_utils.py.
#
# Once we drop support for 0.6.3, we can safely remove this.


# in case heat-cfntools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============3998101986387536441==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============3998101986387536441==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============3998101986387536441==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, int("700", 8))

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so it is timestamped with when finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============3998101986387536441==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============3998101986387536441==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============3998101986387536441==--
',user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=f927dce2-97db-41ff-a7bc-a34d4e7486d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.900 185478 DEBUG nova.network.os_vif_util [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.901 185478 DEBUG nova.network.os_vif_util [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:98:05,bridge_name='br-int',has_traffic_filtering=True,id=4d2a5913-5bee-4ecb-8f19-5653e42acc47,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4d2a5913-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.901 185478 DEBUG os_vif [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:98:05,bridge_name='br-int',has_traffic_filtering=True,id=4d2a5913-5bee-4ecb-8f19-5653e42acc47,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4d2a5913-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.902 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.902 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.902 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.906 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.907 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d2a5913-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.908 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4d2a5913-5b, col_values=(('external_ids', {'iface-id': '4d2a5913-5bee-4ecb-8f19-5653e42acc47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:98:05', 'vm-uuid': 'f927dce2-97db-41ff-a7bc-a34d4e7486d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:51:09 compute-0 NetworkManager[56139]: <info>  [1767624669.9114] manager: (tap4d2a5913-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 05 14:51:09 compute-0 rsyslogd[237079]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 14:51:09.866 185478 DEBUG nova.virt.libvirt.vif [None req-6da4c383-a2 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.910 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.916 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.919 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.920 185478 INFO os_vif [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:98:05,bridge_name='br-int',has_traffic_filtering=True,id=4d2a5913-5bee-4ecb-8f19-5653e42acc47,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4d2a5913-5b')
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.986 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.987 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.987 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.988 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No VIF found with MAC fa:16:3e:84:98:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 14:51:09 compute-0 nova_compute[185474]: 2026-01-05 14:51:09.988 185478 INFO nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Using config drive
Jan 05 14:51:10 compute-0 nova_compute[185474]: 2026-01-05 14:51:10.564 185478 INFO nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Creating config drive at /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.config
Jan 05 14:51:10 compute-0 nova_compute[185474]: 2026-01-05 14:51:10.570 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4g68dv0f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:10 compute-0 nova_compute[185474]: 2026-01-05 14:51:10.707 185478 DEBUG oslo_concurrency.processutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4g68dv0f" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:10 compute-0 kernel: tap4d2a5913-5b: entered promiscuous mode
Jan 05 14:51:10 compute-0 NetworkManager[56139]: <info>  [1767624670.7956] manager: (tap4d2a5913-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 05 14:51:10 compute-0 ovn_controller[97763]: 2026-01-05T14:51:10Z|00040|binding|INFO|Claiming lport 4d2a5913-5bee-4ecb-8f19-5653e42acc47 for this chassis.
Jan 05 14:51:10 compute-0 ovn_controller[97763]: 2026-01-05T14:51:10Z|00041|binding|INFO|4d2a5913-5bee-4ecb-8f19-5653e42acc47: Claiming fa:16:3e:84:98:05 192.168.0.34
Jan 05 14:51:10 compute-0 nova_compute[185474]: 2026-01-05 14:51:10.800 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:10 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:10.815 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:98:05 192.168.0.34'], port_security=['fa:16:3e:84:98:05 192.168.0.34'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-zgjawdmpyczt-xcfguwxpygfw-nks53nwkysgu-port-2omiqc7m4ytm', 'neutron:cidrs': '192.168.0.34/24', 'neutron:device_id': 'f927dce2-97db-41ff-a7bc-a34d4e7486d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-905a1599-2980-4b24-9705-76e3c8a469ea', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-zgjawdmpyczt-xcfguwxpygfw-nks53nwkysgu-port-2omiqc7m4ytm', 'neutron:project_id': '54417029b2fb4b749e20754214013802', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a131d1b-ed26-4729-8c09-f87c7299dcd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9f4be22-b417-4efb-ba81-f8a9c3c4527d, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=4d2a5913-5bee-4ecb-8f19-5653e42acc47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:51:10 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:10.816 107222 INFO neutron.agent.ovn.metadata.agent [-] Port 4d2a5913-5bee-4ecb-8f19-5653e42acc47 in datapath 905a1599-2980-4b24-9705-76e3c8a469ea bound to our chassis
Jan 05 14:51:10 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:10.817 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 905a1599-2980-4b24-9705-76e3c8a469ea
Jan 05 14:51:10 compute-0 ovn_controller[97763]: 2026-01-05T14:51:10Z|00042|binding|INFO|Setting lport 4d2a5913-5bee-4ecb-8f19-5653e42acc47 ovn-installed in OVS
Jan 05 14:51:10 compute-0 ovn_controller[97763]: 2026-01-05T14:51:10Z|00043|binding|INFO|Setting lport 4d2a5913-5bee-4ecb-8f19-5653e42acc47 up in Southbound
Jan 05 14:51:10 compute-0 nova_compute[185474]: 2026-01-05 14:51:10.823 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:10 compute-0 nova_compute[185474]: 2026-01-05 14:51:10.827 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:10 compute-0 systemd-machined[156786]: New machine qemu-3-instance-00000003.
Jan 05 14:51:10 compute-0 systemd-udevd[242029]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 14:51:10 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:10.848 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f32a1b-59c8-487a-9321-74ddc2a8e88f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:51:10 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 05 14:51:10 compute-0 NetworkManager[56139]: <info>  [1767624670.8706] device (tap4d2a5913-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 14:51:10 compute-0 NetworkManager[56139]: <info>  [1767624670.8825] device (tap4d2a5913-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 14:51:10 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:10.899 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[8b1e8ff6-ef41-4ca3-9801-bc4a29fc5d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:51:10 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:10.903 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa99b87-663d-468b-80a7-aea63f6a7a5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:51:10 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:10.936 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9cb13a-b131-4b1e-af6b-e76db12af759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:51:10 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:10.968 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[16f00880-4811-4a78-a09b-588990597f01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap905a1599-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:e4:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 366227, 'reachable_time': 29363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242041, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:51:10 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:10.996 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[d0de8b94-446f-401b-b892-8c7ee12be5f5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366246, 'tstamp': 366246}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242043, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366251, 'tstamp': 366251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242043, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:51:10 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:10.998 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905a1599-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.001 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:11.004 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap905a1599-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:51:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:11.005 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:51:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:11.006 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap905a1599-20, col_values=(('external_ids', {'iface-id': 'add49293-6ad0-4684-b3cd-091b92792de4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:51:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:11.007 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.197 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624671.1965244, f927dce2-97db-41ff-a7bc-a34d4e7486d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.198 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] VM Started (Lifecycle Event)
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.222 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.234 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624671.196734, f927dce2-97db-41ff-a7bc-a34d4e7486d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.235 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] VM Paused (Lifecycle Event)
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.267 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.280 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.305 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.323 185478 DEBUG nova.compute.manager [req-69bd7300-ac75-4982-a010-979afa36ccb8 req-3fb4debc-7035-4caf-845b-6790d09f0138 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Received event network-vif-plugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.325 185478 DEBUG oslo_concurrency.lockutils [req-69bd7300-ac75-4982-a010-979afa36ccb8 req-3fb4debc-7035-4caf-845b-6790d09f0138 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.326 185478 DEBUG oslo_concurrency.lockutils [req-69bd7300-ac75-4982-a010-979afa36ccb8 req-3fb4debc-7035-4caf-845b-6790d09f0138 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.327 185478 DEBUG oslo_concurrency.lockutils [req-69bd7300-ac75-4982-a010-979afa36ccb8 req-3fb4debc-7035-4caf-845b-6790d09f0138 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.328 185478 DEBUG nova.compute.manager [req-69bd7300-ac75-4982-a010-979afa36ccb8 req-3fb4debc-7035-4caf-845b-6790d09f0138 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Processing event network-vif-plugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.330 185478 DEBUG nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.338 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624671.3381686, f927dce2-97db-41ff-a7bc-a34d4e7486d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.339 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] VM Resumed (Lifecycle Event)
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.342 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.352 185478 INFO nova.virt.libvirt.driver [-] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Instance spawned successfully.
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.353 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.360 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.374 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.385 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.387 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.388 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.390 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.391 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.393 185478 DEBUG nova.virt.libvirt.driver [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.400 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.424 185478 DEBUG nova.network.neutron [req-3adb2621-b30d-4d5a-b25c-ae5f07d38330 req-526eb371-5d39-4463-98bd-0924bee9ef8c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Updated VIF entry in instance network info cache for port 4d2a5913-5bee-4ecb-8f19-5653e42acc47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.425 185478 DEBUG nova.network.neutron [req-3adb2621-b30d-4d5a-b25c-ae5f07d38330 req-526eb371-5d39-4463-98bd-0924bee9ef8c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Updating instance_info_cache with network_info: [{"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.451 185478 DEBUG oslo_concurrency.lockutils [req-3adb2621-b30d-4d5a-b25c-ae5f07d38330 req-526eb371-5d39-4463-98bd-0924bee9ef8c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.460 185478 INFO nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Took 8.01 seconds to spawn the instance on the hypervisor.
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.462 185478 DEBUG nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.542 185478 INFO nova.compute.manager [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Took 8.65 seconds to build instance.
Jan 05 14:51:11 compute-0 nova_compute[185474]: 2026-01-05 14:51:11.562 185478 DEBUG oslo_concurrency.lockutils [None req-6da4c383-a2ce-4757-b121-0721f562fcac 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:12 compute-0 podman[242052]: 2026-01-05 14:51:12.645677198 +0000 UTC m=+0.106999957 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 05 14:51:12 compute-0 podman[242051]: 2026-01-05 14:51:12.652530986 +0000 UTC m=+0.117384023 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:51:12 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 14:51:12 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 14:51:13 compute-0 nova_compute[185474]: 2026-01-05 14:51:13.425 185478 DEBUG nova.compute.manager [req-4616ffe4-52af-44d9-878b-cf11edc5c476 req-7409600c-e8b2-4a2e-913b-11e8b63b1780 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Received event network-vif-plugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:51:13 compute-0 nova_compute[185474]: 2026-01-05 14:51:13.426 185478 DEBUG oslo_concurrency.lockutils [req-4616ffe4-52af-44d9-878b-cf11edc5c476 req-7409600c-e8b2-4a2e-913b-11e8b63b1780 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:13 compute-0 nova_compute[185474]: 2026-01-05 14:51:13.427 185478 DEBUG oslo_concurrency.lockutils [req-4616ffe4-52af-44d9-878b-cf11edc5c476 req-7409600c-e8b2-4a2e-913b-11e8b63b1780 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:13 compute-0 nova_compute[185474]: 2026-01-05 14:51:13.428 185478 DEBUG oslo_concurrency.lockutils [req-4616ffe4-52af-44d9-878b-cf11edc5c476 req-7409600c-e8b2-4a2e-913b-11e8b63b1780 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:13 compute-0 nova_compute[185474]: 2026-01-05 14:51:13.428 185478 DEBUG nova.compute.manager [req-4616ffe4-52af-44d9-878b-cf11edc5c476 req-7409600c-e8b2-4a2e-913b-11e8b63b1780 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] No waiting events found dispatching network-vif-plugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 14:51:13 compute-0 nova_compute[185474]: 2026-01-05 14:51:13.429 185478 WARNING nova.compute.manager [req-4616ffe4-52af-44d9-878b-cf11edc5c476 req-7409600c-e8b2-4a2e-913b-11e8b63b1780 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Received unexpected event network-vif-plugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 for instance with vm_state active and task_state None.
Jan 05 14:51:13 compute-0 nova_compute[185474]: 2026-01-05 14:51:13.642 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:14 compute-0 nova_compute[185474]: 2026-01-05 14:51:14.911 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:17 compute-0 podman[242112]: 2026-01-05 14:51:17.623992723 +0000 UTC m=+0.102975235 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 14:51:17 compute-0 podman[242111]: 2026-01-05 14:51:17.643100701 +0000 UTC m=+0.121711043 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:51:18 compute-0 nova_compute[185474]: 2026-01-05 14:51:18.645 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:19 compute-0 nova_compute[185474]: 2026-01-05 14:51:19.914 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:21 compute-0 podman[242152]: 2026-01-05 14:51:21.626121497 +0000 UTC m=+0.108057384 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, architecture=x86_64, name=ubi9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.openshift.tags=base rhel9)
Jan 05 14:51:23 compute-0 nova_compute[185474]: 2026-01-05 14:51:23.650 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:24 compute-0 nova_compute[185474]: 2026-01-05 14:51:24.918 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:28 compute-0 nova_compute[185474]: 2026-01-05 14:51:28.653 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:29 compute-0 podman[242172]: 2026-01-05 14:51:29.58413722 +0000 UTC m=+0.073162881 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 05 14:51:29 compute-0 podman[201880]: time="2026-01-05T14:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:51:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:51:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4364 "" "Go-http-client/1.1"
Jan 05 14:51:29 compute-0 nova_compute[185474]: 2026-01-05 14:51:29.922 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:31 compute-0 openstack_network_exporter[205179]: ERROR   14:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:51:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:51:31 compute-0 openstack_network_exporter[205179]: ERROR   14:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:51:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:51:33 compute-0 nova_compute[185474]: 2026-01-05 14:51:33.657 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:34 compute-0 nova_compute[185474]: 2026-01-05 14:51:34.927 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:38 compute-0 podman[242193]: 2026-01-05 14:51:38.627123168 +0000 UTC m=+0.109274109 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 05 14:51:38 compute-0 nova_compute[185474]: 2026-01-05 14:51:38.664 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:39 compute-0 nova_compute[185474]: 2026-01-05 14:51:39.433 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:51:39 compute-0 nova_compute[185474]: 2026-01-05 14:51:39.434 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:51:39 compute-0 nova_compute[185474]: 2026-01-05 14:51:39.478 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:51:39 compute-0 nova_compute[185474]: 2026-01-05 14:51:39.931 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:40 compute-0 ovn_controller[97763]: 2026-01-05T14:51:40Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:98:05 192.168.0.34
Jan 05 14:51:40 compute-0 ovn_controller[97763]: 2026-01-05T14:51:40Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:98:05 192.168.0.34
Jan 05 14:51:40 compute-0 nova_compute[185474]: 2026-01-05 14:51:40.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:51:40 compute-0 nova_compute[185474]: 2026-01-05 14:51:40.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:51:40 compute-0 nova_compute[185474]: 2026-01-05 14:51:40.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:51:40 compute-0 nova_compute[185474]: 2026-01-05 14:51:40.401 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:51:40 compute-0 podman[242231]: 2026-01-05 14:51:40.71849651 +0000 UTC m=+0.196452706 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 14:51:40 compute-0 ovn_controller[97763]: 2026-01-05T14:51:40Z|00044|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.431 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.432 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.433 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
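[annotation] The Acquiring/acquired/released trio above is the standard DEBUG trace from oslo.concurrency's lockutils `inner` wrapper; the resource tracker serializes on the "compute_resources" lock. A minimal sketch of how such messages are produced, assuming oslo.concurrency is installed (the function name is hypothetical):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    # Runs with the in-process "compute_resources" lock held; lockutils emits
    # the Acquiring/acquired/released DEBUG lines seen in the journal.
    pass

clean_compute_node_cache()
```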
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.434 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.551 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.656 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.658 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.746 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.748 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.844 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.846 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.913 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:41 compute-0 nova_compute[185474]: 2026-01-05 14:51:41.928 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.035 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.037 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.118 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.120 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.223 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.225 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.294 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.306 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.374 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.376 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.445 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.446 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.520 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.522 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:51:42 compute-0 nova_compute[185474]: 2026-01-05 14:51:42.591 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
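[annotation] Each disk probe above follows the same pattern: nova wraps `qemu-img info --force-share --output=json` in `oslo_concurrency.prlimit`, capping the probe at 1 GiB of address space (`--as=1073741824`) and 30 s of CPU (`--cpu=30`). A sketch of the equivalent call, reusing one of the instance paths from the log:

```python
import json
import subprocess

DISK = "/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk"  # path from the log

cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",   # cap address space at 1 GiB
    "--cpu=30",          # cap CPU time at 30 seconds
    "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", DISK, "--force-share", "--output=json",
]
info = json.loads(subprocess.check_output(cmd))
print(info["format"], info["virtual-size"])
```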
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.120 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.123 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4859MB free_disk=72.37868881225586GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.124 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.125 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.253 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.255 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bdb0ea32-677c-48d8-ae08-c15ba402d14f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.255 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance f927dce2-97db-41ff-a7bc-a34d4e7486d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.256 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.257 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.362 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.383 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
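[annotation] The inventory reported to placement above yields schedulable capacity of (total - reserved) x allocation_ratio per resource class: 32 VCPU, 7167 MB of RAM, and 70.2 GB of disk. A quick check of that arithmetic:

```python
# Inventory exactly as reported to placement in the log line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity}")
# VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 70.2
```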
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.410 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.411 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:43 compute-0 podman[242295]: 2026-01-05 14:51:43.66501712 +0000 UTC m=+0.123056439 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 05 14:51:43 compute-0 nova_compute[185474]: 2026-01-05 14:51:43.670 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:43 compute-0 podman[242294]: 2026-01-05 14:51:43.677128215 +0000 UTC m=+0.153107240 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:51:44 compute-0 nova_compute[185474]: 2026-01-05 14:51:44.413 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:51:44 compute-0 nova_compute[185474]: 2026-01-05 14:51:44.415 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:51:44 compute-0 nova_compute[185474]: 2026-01-05 14:51:44.416 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:51:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:44.807 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:51:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:44.808 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:51:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:51:44.808 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:51:44 compute-0 nova_compute[185474]: 2026-01-05 14:51:44.936 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:45 compute-0 nova_compute[185474]: 2026-01-05 14:51:45.354 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:51:45 compute-0 nova_compute[185474]: 2026-01-05 14:51:45.355 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:51:45 compute-0 nova_compute[185474]: 2026-01-05 14:51:45.355 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:51:45 compute-0 nova_compute[185474]: 2026-01-05 14:51:45.356 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:51:47 compute-0 nova_compute[185474]: 2026-01-05 14:51:47.794 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
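[annotation] The refreshed network_info cache above is a JSON list of VIFs; each entry carries the Neutron port id, MAC, and per-subnet fixed and floating addresses. A small sketch that pulls those addresses back out of a structure shaped like the one logged (trimmed to the fields used here):

```python
# Trimmed copy of the structure logged by update_instance_cache_with_nw_info above.
network_info = [{
    "id": "c6393a71-e622-49d1-97df-e208cd2c8f06",
    "address": "fa:16:3e:f3:7f:70",
    "network": {"subnets": [{
        "cidr": "192.168.0.0/24",
        "ips": [{
            "address": "192.168.0.178",
            "floating_ips": [{"address": "192.168.122.228"}],
        }],
    }]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], vif["address"], ip["address"], floats)
# c6393a71-e622-49d1-97df-e208cd2c8f06 fa:16:3e:f3:7f:70 192.168.0.178 ['192.168.122.228']
```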
Jan 05 14:51:47 compute-0 nova_compute[185474]: 2026-01-05 14:51:47.816 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:51:47 compute-0 nova_compute[185474]: 2026-01-05 14:51:47.817 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:51:47 compute-0 nova_compute[185474]: 2026-01-05 14:51:47.818 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:51:47 compute-0 nova_compute[185474]: 2026-01-05 14:51:47.819 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:51:48 compute-0 podman[242335]: 2026-01-05 14:51:48.607166517 +0000 UTC m=+0.102525943 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:51:48 compute-0 podman[242336]: 2026-01-05 14:51:48.670093245 +0000 UTC m=+0.147167465 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:51:48 compute-0 nova_compute[185474]: 2026-01-05 14:51:48.673 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:49 compute-0 nova_compute[185474]: 2026-01-05 14:51:49.940 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:52 compute-0 podman[242381]: 2026-01-05 14:51:52.633978244 +0000 UTC m=+0.107881690 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, build-date=2024-09-18T21:23:30, version=9.4, config_id=kepler, release=1214.1726694543, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 14:51:53 compute-0 nova_compute[185474]: 2026-01-05 14:51:53.676 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:54 compute-0 nova_compute[185474]: 2026-01-05 14:51:54.944 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:58 compute-0 nova_compute[185474]: 2026-01-05 14:51:58.678 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:51:59 compute-0 podman[201880]: time="2026-01-05T14:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:51:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:51:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
Jan 05 14:51:59 compute-0 nova_compute[185474]: 2026-01-05 14:51:59.948 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:00 compute-0 podman[242400]: 2026-01-05 14:52:00.657846325 +0000 UTC m=+0.134927758 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Jan 05 14:52:01 compute-0 anacron[30828]: Job `cron.weekly' started
Jan 05 14:52:01 compute-0 anacron[30828]: Job `cron.weekly' terminated
Jan 05 14:52:01 compute-0 openstack_network_exporter[205179]: ERROR   14:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:52:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:52:01 compute-0 openstack_network_exporter[205179]: ERROR   14:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:52:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:52:03 compute-0 nova_compute[185474]: 2026-01-05 14:52:03.685 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:04 compute-0 nova_compute[185474]: 2026-01-05 14:52:04.952 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:05 compute-0 sshd-session[242420]: Invalid user  from 147.182.205.88 port 59566
Jan 05 14:52:08 compute-0 nova_compute[185474]: 2026-01-05 14:52:08.686 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:08 compute-0 podman[242423]: 2026-01-05 14:52:08.805288699 +0000 UTC m=+0.092477645 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Jan 05 14:52:09 compute-0 nova_compute[185474]: 2026-01-05 14:52:09.955 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:11 compute-0 podman[242442]: 2026-01-05 14:52:11.653981008 +0000 UTC m=+0.134180207 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:52:13 compute-0 sshd-session[242420]: Connection closed by invalid user  147.182.205.88 port 59566 [preauth]
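[annotation] The "Invalid user" / "Connection closed ... [preauth]" pair at 14:52:05-14:52:13 is an unauthenticated probe from 147.182.205.88, unrelated to the deployment. A hedged sketch for pulling such probes out of the journal with the python3-systemd bindings, assuming they are installed and the entries are tagged sshd-session as above:

```python
from systemd import journal  # provided by the python3-systemd package

reader = journal.Reader()
reader.add_match(SYSLOG_IDENTIFIER="sshd-session")

for entry in reader:
    msg = entry.get("MESSAGE", "")
    if "Invalid user" in msg:
        print(entry["__REALTIME_TIMESTAMP"], msg)
```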
Jan 05 14:52:13 compute-0 nova_compute[185474]: 2026-01-05 14:52:13.688 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:14 compute-0 podman[242467]: 2026-01-05 14:52:14.580688 +0000 UTC m=+0.066683752 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:52:14 compute-0 podman[242468]: 2026-01-05 14:52:14.591868448 +0000 UTC m=+0.069419407 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 14:52:14 compute-0 nova_compute[185474]: 2026-01-05 14:52:14.959 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:18 compute-0 nova_compute[185474]: 2026-01-05 14:52:18.693 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:19 compute-0 podman[242510]: 2026-01-05 14:52:19.673975432 +0000 UTC m=+0.128086479 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:52:19 compute-0 podman[242508]: 2026-01-05 14:52:19.689640024 +0000 UTC m=+0.154823697 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 05 14:52:19 compute-0 nova_compute[185474]: 2026-01-05 14:52:19.979 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:23 compute-0 podman[242551]: 2026-01-05 14:52:23.649674407 +0000 UTC m=+0.119276566 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, container_name=kepler, io.openshift.expose-services=, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container)
Jan 05 14:52:23 compute-0 nova_compute[185474]: 2026-01-05 14:52:23.696 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:24 compute-0 nova_compute[185474]: 2026-01-05 14:52:24.983 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:28 compute-0 nova_compute[185474]: 2026-01-05 14:52:28.697 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:29 compute-0 podman[201880]: time="2026-01-05T14:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:52:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:52:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4369 "" "Go-http-client/1.1"
Jan 05 14:52:29 compute-0 nova_compute[185474]: 2026-01-05 14:52:29.986 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:31 compute-0 openstack_network_exporter[205179]: ERROR   14:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:52:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:52:31 compute-0 openstack_network_exporter[205179]: ERROR   14:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:52:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:52:31 compute-0 podman[242572]: 2026-01-05 14:52:31.642440479 +0000 UTC m=+0.117181398 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Jan 05 14:52:33 compute-0 nova_compute[185474]: 2026-01-05 14:52:33.699 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:34 compute-0 nova_compute[185474]: 2026-01-05 14:52:34.990 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.752 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.752 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.752 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.760 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'name': 'test_0', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.764 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bdb0ea32-677c-48d8-ae08-c15ba402d14f', 'name': 'vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.767 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f927dce2-97db-41ff-a7bc-a34d4e7486d4 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 14:52:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:37.770 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f927dce2-97db-41ff-a7bc-a34d4e7486d4 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}3a9a6b0d955f091f392374a695f163a2995629ca5c315b3823e8a6b9c12e4c9b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.496 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Mon, 05 Jan 2026 14:52:37 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b5b1cbb5-d53b-49f2-ac2d-56d6296cc302 x-openstack-request-id: req-b5b1cbb5-d53b-49f2-ac2d-56d6296cc302 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.496 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f927dce2-97db-41ff-a7bc-a34d4e7486d4", "name": "vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek", "status": "ACTIVE", "tenant_id": "54417029b2fb4b749e20754214013802", "user_id": "4c0cf318026a40748762c9e05cd1efe0", "metadata": {"metering.server_group": "fb98dcdd-a12e-44ca-97ca-fe43134a3faa"}, "hostId": "35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc", "image": {"id": "22e54d95-dd91-4f66-a65f-ce9984e648dc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/22e54d95-dd91-4f66-a65f-ce9984e648dc"}]}, "flavor": {"id": "afe04c80-f0ab-417e-844c-b5b05cc96b17", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/afe04c80-f0ab-417e-844c-b5b05cc96b17"}]}, "created": "2026-01-05T14:51:01Z", "updated": "2026-01-05T14:51:11Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.34", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:84:98:05"}, {"version": 4, "addr": "192.168.122.246", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:84:98:05"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f927dce2-97db-41ff-a7bc-a34d4e7486d4"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f927dce2-97db-41ff-a7bc-a34d4e7486d4"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-05T14:51:11.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.496 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f927dce2-97db-41ff-a7bc-a34d4e7486d4 used request id req-b5b1cbb5-d53b-49f2-ac2d-56d6296cc302 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.499 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f927dce2-97db-41ff-a7bc-a34d4e7486d4', 'name': 'vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.500 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.500 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T14:52:38.500528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.597 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 1728689582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.597 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 18915144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.598 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.697 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 1225483066 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.698 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 12433569 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.698 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 nova_compute[185474]: 2026-01-05 14:52:38.701 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.767 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 1801199740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.768 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 10969023 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.769 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.769 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.770 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.770 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.770 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.770 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.770 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 396012509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.771 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 113701999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.771 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 62657112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.771 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 601656532 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.771 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 105953551 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.771 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 68177111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.772 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 545412987 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.772 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 103754380 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.772 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 84932339 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.772 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T14:52:38.770307) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.773 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.773 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.773 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.773 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.773 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T14:52:38.773345) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.773 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.774 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.774 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.774 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.774 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.774 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.775 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.775 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.775 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.775 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.776 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.776 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T14:52:38.776066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.813 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.814 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.814 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.845 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.845 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.845 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.870 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.871 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.871 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.872 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.873 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.873 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.873 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.875 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T14:52:38.873719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.880 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.886 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.891 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f927dce2-97db-41ff-a7bc-a34d4e7486d4 / tap4d2a5913-5b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.891 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.892 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
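Note the worker-id column: the polling and sampling lines are emitted by worker 14, while every "Updated heartbeat for ..." line comes from worker 12, so the heartbeat confirmations can land slightly out of timestamp order with the poller's own output. A hedged sketch of that split, using a queue between a polling thread and a status-writing thread (the pattern only, not ceilometer's actual implementation):

    # Hedged sketch: one thread polls and enqueues heartbeat events, a second
    # thread persists them. This is one plausible reason the "Updated heartbeat"
    # lines carry a different worker id (12) than the polling lines (14).
    import datetime
    import queue
    import threading

    events = queue.Queue()
    status = {}                      # meter name -> last heartbeat timestamp


    def status_writer():
        while True:
            meter, ts = events.get()
            if meter is None:
                break
            status[meter] = ts       # a real agent would persist this somewhere


    def poller(meters):
        for meter in meters:
            events.put((meter, datetime.datetime.now(datetime.timezone.utc)))
        events.put((None, None))     # shut the writer down


    writer = threading.Thread(target=status_writer)
    writer.start()
    poller(["network.outgoing.packets.drop", "disk.device.write.bytes"])
    writer.join()
    print(status)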
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.893 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.894 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.894 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 41832448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T14:52:38.893741) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.895 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.895 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.896 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.896 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.896 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.897 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.897 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.898 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.899 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
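disk.device.write.bytes (and disk.device.read.bytes further down) are cumulative per-device counters from libvirt, one sample per attached block device, which is why each instance contributes three volumes here (for example 41832448, 512 and 0). The sketch below reads the same raw counters directly with the libvirt-python bindings; it assumes the bindings are installed and that the caller can open qemu:///system, and the device names are guesses:

    # Hedged sketch: read the cumulative per-device I/O counters that back
    # disk.device.read.bytes / disk.device.write.bytes straight from libvirt.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            for dev in ("vda", "vdb"):          # device names are illustrative
                try:
                    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
                except libvirt.libvirtError:
                    continue                    # device not attached to this domain
                print(f"{dom.UUIDString()}/{dev}: read={rd_bytes}B write={wr_bytes}B")
    finally:
        conn.close()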
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.899 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.900 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T14:52:38.900155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
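disk.ephemeral.size (and disk.root.size later in this cycle) are gigabyte figures taken from the instance's flavor rather than from libvirt counters, which would explain why no per-device _stats_to_sample lines appear for them. A hedged sketch of that kind of flavor-derived sample; the field names are assumptions, not the ceilometer or Nova schema:

    # Hedged sketch: root/ephemeral disk sizes as flavor-derived samples (GB).
    # The dictionary layout and key names below are assumptions for illustration.
    def size_samples(instance):
        return {
            "disk.root.size": instance["flavor"]["disk"],            # GB
            "disk.ephemeral.size": instance["flavor"]["ephemeral"],  # GB
        }

    print(size_samples({"flavor": {"disk": 1, "ephemeral": 0}}))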
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.902 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.902 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.903 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T14:52:38.902944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.903 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.903 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.904 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.904 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.904 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.905 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.905 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.905 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.905 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.906 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.906 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
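disk.device.capacity reports 1073741824 bytes, i.e. exactly 1 GiB, for the first two devices of every instance, plus a much smaller third device of 485376 or 583680 bytes (around half a MiB; plausibly a config drive, though the log does not say). A quick check of the arithmetic:

    # 1073741824 bytes is exactly 1 GiB; the small third devices are ~0.5 MiB.
    print(1073741824 / 1024**3)   # 1.0 GiB
    print(583680 / 1024)          # 570.0 KiB
    print(485376 / 1024)          # 474.0 KiB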
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.906 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.906 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.907 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.907 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T14:52:38.907311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.907 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.908 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.908 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.908 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.908 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.909 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.909 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.909 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.909 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.910 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.911 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T14:52:38.910957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.945 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/cpu volume: 39850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:38.981 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/cpu volume: 295100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.013 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/cpu volume: 29250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
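The cpu meter is cumulative guest CPU time in nanoseconds, so 39850000000 is roughly 39.85 s of CPU consumed by instance 731f6e65 since it started; a utilization percentage only falls out of the difference between two polls. A sketch of that derivation; the previous reading, the 600 s interval and the single vCPU are assumed values, not taken from this log:

    # Hedged sketch: derive a cpu_util-style percentage from two cumulative
    # cpu samples (nanoseconds), the way a rate calculation would downstream.
    def cpu_util(prev_ns, curr_ns, interval_s, vcpus):
        used_s = (curr_ns - prev_ns) / 1e9
        return 100.0 * used_s / (interval_s * vcpus)

    # e.g. two consecutive polls of a 1-vCPU instance, 600 s apart:
    print(cpu_util(39_250_000_000, 39_850_000_000, 600, 1))   # prints 0.1 (percent)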
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.014 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.014 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.015 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.015 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T14:52:39.014995) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.016 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.016 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.016 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.017 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.018 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.020 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T14:52:39.018508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.019 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.020 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.020 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.021 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.022 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.022 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.023 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.023 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek>]
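The *.bytes.rate meters fail here because the libvirt inspector only exposes cumulative byte counters, not precomputed rates; the pollster raises PollsterPermanentError and, judging by the message, the manager records the affected resource so that pollster is not run against it again. A hedged sketch of that blacklisting pattern (the class and function names are illustrative):

    # Hedged sketch of "permanent error" blacklisting: once a pollster reports
    # that a resource can never yield data, the manager skips it from then on.
    class PermanentError(Exception):
        pass


    def rate_pollster(resource):
        raise PermanentError(resource)      # inspector has no rate data at all


    def run_cycle(pollster, resources, blacklist):
        for res in resources:
            if res in blacklist:
                continue
            try:
                pollster(res)
            except PermanentError:
                print(f"Prevent pollster from polling {res} anymore!")
                blacklist.add(res)


    blacklist = set()
    servers = ["vn-dmpyczt-example-server"]
    run_cycle(rate_pollster, servers, blacklist)   # logs the error once
    run_cycle(rate_pollster, servers, blacklist)   # silently skipped afterwards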
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.024 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.025 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.025 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.026 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.026 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.026 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.027 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.027 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.027 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek>]
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.028 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.028 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.028 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.028 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.028 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.029 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.029 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.030 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
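network.incoming.packets, network.outgoing.packets and the .error / .drop meters seen earlier all come from the per-vNIC counters libvirt keeps for each tap device. The sketch below pulls the same 8-tuple of counters with libvirt-python; it assumes access to qemu:///system, and the tap device name is copied from the earlier inspect_vnics line purely as an example:

    # Hedged sketch: the network packets / errors / drop meters are backed by
    # per-vNIC counters; interfaceStats() returns them as one 8-tuple per device.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            for iface in ("tap4d2a5913-5b",):   # interface name is illustrative
                try:
                    (rx_bytes, rx_pkts, rx_errs, rx_drop,
                     tx_bytes, tx_pkts, tx_errs, tx_drop) = dom.interfaceStats(iface)
                except libvirt.libvirtError:
                    continue                    # vNIC not attached to this domain
                print(dom.UUIDString(), rx_pkts, tx_pkts,
                      rx_errs, tx_errs, rx_drop, tx_drop)
    finally:
        conn.close()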
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.030 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.030 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.031 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.031 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T14:52:39.022731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.032 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T14:52:39.025410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T14:52:39.027257) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T14:52:39.028605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.032 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T14:52:39.031130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.033 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.035 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T14:52:39.035584) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.035 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.036 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.036 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.037 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.038 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T14:52:39.038889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.039 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.039 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.040 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.040 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
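The *.bytes.delta meters report the change since the previous poll, so the inspector has to remember the last cumulative reading per instance and vNIC; the earlier "No delta meter predecessor for f927dce2-... / tap4d2a5913-5b" DEBUG line marks a first reading with nothing cached, which is consistent with that instance reporting a delta of 0 here. A minimal sketch of such a predecessor cache (not the ceilometer implementation):

    # Hedged sketch: a delta meter keeps the previous cumulative reading per
    # (instance, vNIC) and reports the difference; with no predecessor cached,
    # the first reading yields 0.
    cache = {}


    def bytes_delta(instance_id, vnic, cumulative_bytes):
        key = (instance_id, vnic)
        prev = cache.get(key)
        cache[key] = cumulative_bytes
        if prev is None:
            return 0                      # no predecessor yet
        return cumulative_bytes - prev


    print(bytes_delta("f927dce2", "tap4d2a5913-5b", 2146))   # 0: first poll
    print(bytes_delta("f927dce2", "tap4d2a5913-5b", 2230))   # 84 on the next poll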
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.041 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.041 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T14:52:39.041466) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.042 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.042 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.bytes volume: 4830 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.042 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.042 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.043 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.043 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T14:52:39.043552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.044 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.044 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.044 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.045 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.045 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.045 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T14:52:39.045546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.046 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.046 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.046 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.046 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
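All three instances report power.state volume 1, the numeric code for a running domain: 1 means "running" both in libvirt's virDomainState enum and in Nova's power_state constants. A small lookup table for reading these samples (the mapping below follows the libvirt enum):

    # Hedged sketch: decode power.state sample values; 1 is a running domain.
    DOMAIN_STATE = {
        0: "no state",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutting down",
        5: "shut off",
        6: "crashed",
        7: "suspended (pm)",
    }

    print(DOMAIN_STATE[1])   # all three instances report 1 -> running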
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.047 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.047 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.047 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T14:52:39.047390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.047 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/memory.usage volume: 48.7890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.048 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/memory.usage volume: 49.12890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.048 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/memory.usage volume: 49.0078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.049 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.049 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.049 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T14:52:39.049283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.049 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.050 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.050 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.050 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.050 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.051 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.051 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.051 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.051 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.052 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.052 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.053 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T14:52:39.053027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.053 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.053 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.054 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.054 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.054 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.055 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T14:52:39.055011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.055 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.055 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.056 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.056 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.056 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.056 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.057 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.057 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.057 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.058 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.058 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.063 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:52:39.064 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:52:39 compute-0 nova_compute[185474]: 2026-01-05 14:52:39.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:52:39 compute-0 podman[242593]: 2026-01-05 14:52:39.601802558 +0000 UTC m=+0.078551780 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.)
Jan 05 14:52:39 compute-0 nova_compute[185474]: 2026-01-05 14:52:39.995 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:40 compute-0 nova_compute[185474]: 2026-01-05 14:52:40.394 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:52:40 compute-0 nova_compute[185474]: 2026-01-05 14:52:40.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:52:40 compute-0 nova_compute[185474]: 2026-01-05 14:52:40.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:52:41 compute-0 nova_compute[185474]: 2026-01-05 14:52:41.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:52:42 compute-0 nova_compute[185474]: 2026-01-05 14:52:42.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:52:42 compute-0 nova_compute[185474]: 2026-01-05 14:52:42.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:52:42 compute-0 podman[242613]: 2026-01-05 14:52:42.643061635 +0000 UTC m=+0.114816472 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 05 14:52:43 compute-0 nova_compute[185474]: 2026-01-05 14:52:43.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:52:43 compute-0 nova_compute[185474]: 2026-01-05 14:52:43.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:52:43 compute-0 nova_compute[185474]: 2026-01-05 14:52:43.705 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:44 compute-0 nova_compute[185474]: 2026-01-05 14:52:44.382 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:52:44 compute-0 nova_compute[185474]: 2026-01-05 14:52:44.382 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:52:44 compute-0 nova_compute[185474]: 2026-01-05 14:52:44.383 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:52:44 compute-0 podman[242641]: 2026-01-05 14:52:44.771293984 +0000 UTC m=+0.099726695 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 14:52:44 compute-0 podman[242640]: 2026-01-05 14:52:44.799246476 +0000 UTC m=+0.119762698 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:52:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:52:44.808 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:52:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:52:44.808 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:52:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:52:44.809 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:52:45 compute-0 nova_compute[185474]: 2026-01-05 14:52:45.000 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.702 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updating instance_info_cache with network_info: [{"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.722 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.723 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.724 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.725 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.748 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.749 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.749 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.750 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.884 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.989 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:46 compute-0 nova_compute[185474]: 2026-01-05 14:52:46.990 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.062 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.063 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.162 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.165 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.261 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.275 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.352 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.354 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.415 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.418 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.475 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.477 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.538 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.554 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.609 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.610 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.672 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.673 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.756 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.758 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:52:47 compute-0 nova_compute[185474]: 2026-01-05 14:52:47.854 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.309 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.312 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4879MB free_disk=72.37870025634766GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.313 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.314 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.428 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.430 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bdb0ea32-677c-48d8-ae08-c15ba402d14f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.430 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance f927dce2-97db-41ff-a7bc-a34d4e7486d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.431 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.432 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.504 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.528 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.531 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.532 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:52:48 compute-0 nova_compute[185474]: 2026-01-05 14:52:48.706 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:50 compute-0 nova_compute[185474]: 2026-01-05 14:52:50.004 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:50 compute-0 podman[242719]: 2026-01-05 14:52:50.635898809 +0000 UTC m=+0.101001668 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:52:50 compute-0 podman[242718]: 2026-01-05 14:52:50.643079656 +0000 UTC m=+0.124935414 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi)
Jan 05 14:52:53 compute-0 nova_compute[185474]: 2026-01-05 14:52:53.711 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:54 compute-0 podman[242759]: 2026-01-05 14:52:54.624335275 +0000 UTC m=+0.106935790 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, version=9.4, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 05 14:52:55 compute-0 nova_compute[185474]: 2026-01-05 14:52:55.007 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:58 compute-0 nova_compute[185474]: 2026-01-05 14:52:58.711 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:52:59 compute-0 podman[201880]: time="2026-01-05T14:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:52:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:52:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Jan 05 14:53:00 compute-0 nova_compute[185474]: 2026-01-05 14:53:00.010 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:01 compute-0 openstack_network_exporter[205179]: ERROR   14:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:53:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:53:01 compute-0 openstack_network_exporter[205179]: ERROR   14:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:53:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:53:01 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:01.605 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:53:01 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:01.606 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 14:53:01 compute-0 nova_compute[185474]: 2026-01-05 14:53:01.614 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:02 compute-0 podman[242778]: 2026-01-05 14:53:02.623550067 +0000 UTC m=+0.103246399 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute)
Jan 05 14:53:03 compute-0 nova_compute[185474]: 2026-01-05 14:53:03.716 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:05 compute-0 nova_compute[185474]: 2026-01-05 14:53:05.014 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:08.610 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:53:08 compute-0 nova_compute[185474]: 2026-01-05 14:53:08.719 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:09 compute-0 nova_compute[185474]: 2026-01-05 14:53:09.950 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bf9485c0-8711-436a-aad0-658ecba71329" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:09 compute-0 nova_compute[185474]: 2026-01-05 14:53:09.952 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:09 compute-0 nova_compute[185474]: 2026-01-05 14:53:09.974 185478 DEBUG nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.019 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.081 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.082 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.092 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.093 185478 INFO nova.compute.claims [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Claim successful on node compute-0.ctlplane.example.com
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.315 185478 DEBUG nova.compute.provider_tree [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.333 185478 DEBUG nova.scheduler.client.report [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.370 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.372 185478 DEBUG nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.438 185478 DEBUG nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.440 185478 DEBUG nova.network.neutron [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.490 185478 INFO nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.539 185478 DEBUG nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 14:53:10 compute-0 podman[242798]: 2026-01-05 14:53:10.616788436 +0000 UTC m=+0.102109627 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41)
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.667 185478 DEBUG nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.670 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.672 185478 INFO nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Creating image(s)
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.673 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.674 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.676 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.694 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.757 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.759 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.760 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.775 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.843 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.845 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4,backing_fmt=raw /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.906 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4,backing_fmt=raw /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk 1073741824" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.921 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bb725f888e0151a5f32c575893ef36b5ca6478d4" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:10 compute-0 nova_compute[185474]: 2026-01-05 14:53:10.923 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.019 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb725f888e0151a5f32c575893ef36b5ca6478d4 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.022 185478 DEBUG nova.virt.disk.api [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Checking if we can resize image /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.024 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.128 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.131 185478 DEBUG nova.virt.disk.api [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Cannot resize image /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.132 185478 DEBUG nova.objects.instance [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'migration_context' on Instance uuid bf9485c0-8711-436a-aad0-658ecba71329 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.158 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.160 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.163 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.190 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.285 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.286 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.287 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.307 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.389 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.393 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.452 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 1073741824" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.454 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.454 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.548 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.550 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.550 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Ensure instance console log exists: /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.551 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.552 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.552 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.557 185478 DEBUG nova.network.neutron [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Successfully updated port: adeb7ded-97b9-4df8-bd1a-dbc14421a73f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.578 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.579 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquired lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.579 185478 DEBUG nova.network.neutron [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.663 185478 DEBUG nova.compute.manager [req-6bab6003-ee32-4f0a-9318-3c4c4abfed62 req-e7e9b21c-2b9c-4208-9e4f-33ea4bb102a7 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Received event network-changed-adeb7ded-97b9-4df8-bd1a-dbc14421a73f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.664 185478 DEBUG nova.compute.manager [req-6bab6003-ee32-4f0a-9318-3c4c4abfed62 req-e7e9b21c-2b9c-4208-9e4f-33ea4bb102a7 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Refreshing instance network info cache due to event network-changed-adeb7ded-97b9-4df8-bd1a-dbc14421a73f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.665 185478 DEBUG oslo_concurrency.lockutils [req-6bab6003-ee32-4f0a-9318-3c4c4abfed62 req-e7e9b21c-2b9c-4208-9e4f-33ea4bb102a7 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:53:11 compute-0 nova_compute[185474]: 2026-01-05 14:53:11.799 185478 DEBUG nova.network.neutron [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 14:53:13 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 14:53:13 compute-0 podman[242848]: 2026-01-05 14:53:13.314038744 +0000 UTC m=+0.167228901 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:53:13 compute-0 nova_compute[185474]: 2026-01-05 14:53:13.721 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:14 compute-0 sshd-session[242873]: Invalid user solv from 165.22.168.95 port 48376
Jan 05 14:53:14 compute-0 sshd-session[242873]: Connection closed by invalid user solv 165.22.168.95 port 48376 [preauth]
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.024 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.386 185478 DEBUG nova.network.neutron [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Updating instance_info_cache with network_info: [{"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.409 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Releasing lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.409 185478 DEBUG nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Instance network_info: |[{"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.410 185478 DEBUG oslo_concurrency.lockutils [req-6bab6003-ee32-4f0a-9318-3c4c4abfed62 req-e7e9b21c-2b9c-4208-9e4f-33ea4bb102a7 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.411 185478 DEBUG nova.network.neutron [req-6bab6003-ee32-4f0a-9318-3c4c4abfed62 req-e7e9b21c-2b9c-4208-9e4f-33ea4bb102a7 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Refreshing network info cache for port adeb7ded-97b9-4df8-bd1a-dbc14421a73f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.417 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Start _get_guest_xml network_info=[{"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T14:44:12Z,direct_url=<?>,disk_format='qcow2',id=22e54d95-dd91-4f66-a65f-ce9984e648dc,min_disk=0,min_ram=0,name='cirros',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T14:44:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}], 'ephemerals': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.430 185478 WARNING nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.445 185478 DEBUG nova.virt.libvirt.host [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.446 185478 DEBUG nova.virt.libvirt.host [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.451 185478 DEBUG nova.virt.libvirt.host [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.452 185478 DEBUG nova.virt.libvirt.host [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.453 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.453 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T14:44:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='afe04c80-f0ab-417e-844c-b5b05cc96b17',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T14:44:12Z,direct_url=<?>,disk_format='qcow2',id=22e54d95-dd91-4f66-a65f-ce9984e648dc,min_disk=0,min_ram=0,name='cirros',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T14:44:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.454 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.455 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.456 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.456 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.457 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.458 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.458 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.459 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.460 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.460 185478 DEBUG nova.virt.hardware [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.467 185478 DEBUG nova.virt.libvirt.vif [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T14:53:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj',id=4,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='fb98dcdd-a12e-44ca-97ca-fe43134a3faa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-yoo0u7c7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T14:53:10Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEzNzgyOTM1NDU5OTI4Njg3MjY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTM3ODI5MzU0NTk5Mjg2ODcyNj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEzNzgyOTM1NDU5OTI4Njg3MjY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 05 14:53:15 compute-0 nova_compute[185474]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTM3ODI5MzU0NTk5Mjg2ODcyNj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEzNzgyOTM1NDU5OTI4Njg3MjY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0tLQo=',user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=bf9485c0-8711-436a-aad0-658ecba71329,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.468 185478 DEBUG nova.network.os_vif_util [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.469 185478 DEBUG nova.network.os_vif_util [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:7d:54,bridge_name='br-int',has_traffic_filtering=True,id=adeb7ded-97b9-4df8-bd1a-dbc14421a73f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadeb7ded-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.471 185478 DEBUG nova.objects.instance [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'pci_devices' on Instance uuid bf9485c0-8711-436a-aad0-658ecba71329 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.487 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] End _get_guest_xml xml=<domain type="kvm">
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <uuid>bf9485c0-8711-436a-aad0-658ecba71329</uuid>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <name>instance-00000004</name>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <memory>524288</memory>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <metadata>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <nova:name>vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj</nova:name>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 14:53:15</nova:creationTime>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <nova:flavor name="m1.small">
Jan 05 14:53:15 compute-0 nova_compute[185474]:         <nova:memory>512</nova:memory>
Jan 05 14:53:15 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 14:53:15 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 14:53:15 compute-0 nova_compute[185474]:         <nova:ephemeral>1</nova:ephemeral>
Jan 05 14:53:15 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 14:53:15 compute-0 nova_compute[185474]:         <nova:user uuid="4c0cf318026a40748762c9e05cd1efe0">admin</nova:user>
Jan 05 14:53:15 compute-0 nova_compute[185474]:         <nova:project uuid="54417029b2fb4b749e20754214013802">admin</nova:project>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="22e54d95-dd91-4f66-a65f-ce9984e648dc"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <nova:ports>
Jan 05 14:53:15 compute-0 nova_compute[185474]:         <nova:port uuid="adeb7ded-97b9-4df8-bd1a-dbc14421a73f">
Jan 05 14:53:15 compute-0 nova_compute[185474]:           <nova:ip type="fixed" address="192.168.0.72" ipVersion="4"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:         </nova:port>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       </nova:ports>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   </metadata>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <system>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <entry name="serial">bf9485c0-8711-436a-aad0-658ecba71329</entry>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <entry name="uuid">bf9485c0-8711-436a-aad0-658ecba71329</entry>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     </system>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <os>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   </os>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <features>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <apic/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   </features>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   </clock>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   </cpu>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   <devices>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <target dev="vdb" bus="virtio"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.config"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <interface type="ethernet">
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <mac address="fa:16:3e:ef:7d:54"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <mtu size="1442"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <target dev="tapadeb7ded-97"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     </interface>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/console.log" append="off"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     </serial>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <video>
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     </video>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     </rng>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 14:53:15 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 14:53:15 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 14:53:15 compute-0 nova_compute[185474]:   </devices>
Jan 05 14:53:15 compute-0 nova_compute[185474]: </domain>
Jan 05 14:53:15 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.487 185478 DEBUG nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Preparing to wait for external event network-vif-plugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.487 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bf9485c0-8711-436a-aad0-658ecba71329-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.488 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.488 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.489 185478 DEBUG nova.virt.libvirt.vif [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T14:53:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj',id=4,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='fb98dcdd-a12e-44ca-97ca-fe43134a3faa'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-yoo0u7c7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T14:53:10Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEzNzgyOTM1NDU5OTI4Njg3MjY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTM3ODI5MzU0NTk5Mjg2ODcyNj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEzNzgyOTM1NDU5OTI4Njg3MjY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 05 14:53:15 compute-0 nova_compute[185474]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTM3ODI5MzU0NTk5Mjg2ODcyNj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEzNzgyOTM1NDU5OTI4Njg3MjY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0tLQo=',user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=bf9485c0-8711-436a-aad0-658ecba71329,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.489 185478 DEBUG nova.network.os_vif_util [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.490 185478 DEBUG nova.network.os_vif_util [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:7d:54,bridge_name='br-int',has_traffic_filtering=True,id=adeb7ded-97b9-4df8-bd1a-dbc14421a73f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadeb7ded-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.490 185478 DEBUG os_vif [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:7d:54,bridge_name='br-int',has_traffic_filtering=True,id=adeb7ded-97b9-4df8-bd1a-dbc14421a73f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadeb7ded-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.491 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.492 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.492 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.496 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.496 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapadeb7ded-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.497 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapadeb7ded-97, col_values=(('external_ids', {'iface-id': 'adeb7ded-97b9-4df8-bd1a-dbc14421a73f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:7d:54', 'vm-uuid': 'bf9485c0-8711-436a-aad0-658ecba71329'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.500 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.503 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 14:53:15 compute-0 NetworkManager[56139]: <info>  [1767624795.5036] manager: (tapadeb7ded-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.516 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.518 185478 INFO os_vif [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:7d:54,bridge_name='br-int',has_traffic_filtering=True,id=adeb7ded-97b9-4df8-bd1a-dbc14421a73f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadeb7ded-97')
Jan 05 14:53:15 compute-0 podman[242877]: 2026-01-05 14:53:15.63856537 +0000 UTC m=+0.116802030 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 05 14:53:15 compute-0 rsyslogd[237079]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 14:53:15.467 185478 DEBUG nova.virt.libvirt.vif [None req-b7923e8b-e2 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 14:53:15 compute-0 podman[242875]: 2026-01-05 14:53:15.656415289 +0000 UTC m=+0.125979422 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.687 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.688 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.688 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.688 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No VIF found with MAC fa:16:3e:ef:7d:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 14:53:15 compute-0 nova_compute[185474]: 2026-01-05 14:53:15.689 185478 INFO nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Using config drive
Jan 05 14:53:15 compute-0 rsyslogd[237079]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 14:53:15.489 185478 DEBUG nova.virt.libvirt.vif [None req-b7923e8b-e2 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 14:53:16 compute-0 nova_compute[185474]: 2026-01-05 14:53:16.773 185478 INFO nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Creating config drive at /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.config
Jan 05 14:53:16 compute-0 nova_compute[185474]: 2026-01-05 14:53:16.786 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpypfq4_6b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:16 compute-0 nova_compute[185474]: 2026-01-05 14:53:16.919 185478 DEBUG oslo_concurrency.processutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpypfq4_6b" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:17 compute-0 kernel: tapadeb7ded-97: entered promiscuous mode
Jan 05 14:53:17 compute-0 NetworkManager[56139]: <info>  [1767624797.0270] manager: (tapadeb7ded-97): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 05 14:53:17 compute-0 ovn_controller[97763]: 2026-01-05T14:53:17Z|00045|binding|INFO|Claiming lport adeb7ded-97b9-4df8-bd1a-dbc14421a73f for this chassis.
Jan 05 14:53:17 compute-0 ovn_controller[97763]: 2026-01-05T14:53:17Z|00046|binding|INFO|adeb7ded-97b9-4df8-bd1a-dbc14421a73f: Claiming fa:16:3e:ef:7d:54 192.168.0.72
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.029 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.039 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:7d:54 192.168.0.72'], port_security=['fa:16:3e:ef:7d:54 192.168.0.72'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-zgjawdmpyczt-acrgehsdshfx-zaln7rhtkf7p-port-vy562cz6xjpw', 'neutron:cidrs': '192.168.0.72/24', 'neutron:device_id': 'bf9485c0-8711-436a-aad0-658ecba71329', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-905a1599-2980-4b24-9705-76e3c8a469ea', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-zgjawdmpyczt-acrgehsdshfx-zaln7rhtkf7p-port-vy562cz6xjpw', 'neutron:project_id': '54417029b2fb4b749e20754214013802', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a131d1b-ed26-4729-8c09-f87c7299dcd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9f4be22-b417-4efb-ba81-f8a9c3c4527d, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=adeb7ded-97b9-4df8-bd1a-dbc14421a73f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.042 107222 INFO neutron.agent.ovn.metadata.agent [-] Port adeb7ded-97b9-4df8-bd1a-dbc14421a73f in datapath 905a1599-2980-4b24-9705-76e3c8a469ea bound to our chassis
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.043 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 905a1599-2980-4b24-9705-76e3c8a469ea
Jan 05 14:53:17 compute-0 ovn_controller[97763]: 2026-01-05T14:53:17Z|00047|binding|INFO|Setting lport adeb7ded-97b9-4df8-bd1a-dbc14421a73f ovn-installed in OVS
Jan 05 14:53:17 compute-0 ovn_controller[97763]: 2026-01-05T14:53:17Z|00048|binding|INFO|Setting lport adeb7ded-97b9-4df8-bd1a-dbc14421a73f up in Southbound
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.061 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.070 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.073 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[0a248e0b-77d0-4b34-9cf0-f82c15856683]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:53:17 compute-0 systemd-udevd[242937]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 14:53:17 compute-0 NetworkManager[56139]: <info>  [1767624797.0949] device (tapadeb7ded-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 14:53:17 compute-0 NetworkManager[56139]: <info>  [1767624797.0965] device (tapadeb7ded-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 14:53:17 compute-0 systemd-machined[156786]: New machine qemu-4-instance-00000004.
Jan 05 14:53:17 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.123 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e0518b-9fde-4a44-a651-b9d60ebc6ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.127 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[64ce03af-9e48-4877-a544-47af84a089be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.166 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[e09f340f-fee8-4c9e-96d9-6436b7c9ce07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.194 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[c6970a60-60be-48aa-af88-75c562cb9a14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap905a1599-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:e4:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 366227, 'reachable_time': 41299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242949, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.220 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca3b867-7761-4e10-9a7e-75c8e7286e30]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366246, 'tstamp': 366246}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242953, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366251, 'tstamp': 366251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242953, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.222 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905a1599-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.224 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.226 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.227 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap905a1599-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.227 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.228 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap905a1599-20, col_values=(('external_ids', {'iface-id': 'add49293-6ad0-4684-b3cd-091b92792de4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:53:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:17.228 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.524 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624797.5229003, bf9485c0-8711-436a-aad0-658ecba71329 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.525 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] VM Started (Lifecycle Event)
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.574 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.584 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624797.5231252, bf9485c0-8711-436a-aad0-658ecba71329 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.585 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] VM Paused (Lifecycle Event)
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.614 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.621 185478 DEBUG nova.compute.manager [req-179080d7-e8d4-47e9-bf03-8aebd68612b0 req-1b7c189b-bd89-42b6-a2ed-9765399d389a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Received event network-vif-plugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.622 185478 DEBUG oslo_concurrency.lockutils [req-179080d7-e8d4-47e9-bf03-8aebd68612b0 req-1b7c189b-bd89-42b6-a2ed-9765399d389a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "bf9485c0-8711-436a-aad0-658ecba71329-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.622 185478 DEBUG oslo_concurrency.lockutils [req-179080d7-e8d4-47e9-bf03-8aebd68612b0 req-1b7c189b-bd89-42b6-a2ed-9765399d389a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.623 185478 DEBUG oslo_concurrency.lockutils [req-179080d7-e8d4-47e9-bf03-8aebd68612b0 req-1b7c189b-bd89-42b6-a2ed-9765399d389a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.623 185478 DEBUG nova.compute.manager [req-179080d7-e8d4-47e9-bf03-8aebd68612b0 req-1b7c189b-bd89-42b6-a2ed-9765399d389a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Processing event network-vif-plugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.629 185478 DEBUG nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.634 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.639 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.649 185478 INFO nova.virt.libvirt.driver [-] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Instance spawned successfully.
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.650 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.658 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.659 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767624797.6368911, bf9485c0-8711-436a-aad0-658ecba71329 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.659 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] VM Resumed (Lifecycle Event)
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.693 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.696 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.697 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.698 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.699 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.699 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.700 185478 DEBUG nova.virt.libvirt.driver [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.709 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.740 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.762 185478 INFO nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Took 7.09 seconds to spawn the instance on the hypervisor.
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.762 185478 DEBUG nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.848 185478 INFO nova.compute.manager [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Took 7.80 seconds to build instance.
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.875 185478 DEBUG oslo_concurrency.lockutils [None req-b7923e8b-e264-4004-b50c-b2d79cebe041 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.887 185478 DEBUG nova.network.neutron [req-6bab6003-ee32-4f0a-9318-3c4c4abfed62 req-e7e9b21c-2b9c-4208-9e4f-33ea4bb102a7 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Updated VIF entry in instance network info cache for port adeb7ded-97b9-4df8-bd1a-dbc14421a73f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.888 185478 DEBUG nova.network.neutron [req-6bab6003-ee32-4f0a-9318-3c4c4abfed62 req-e7e9b21c-2b9c-4208-9e4f-33ea4bb102a7 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Updating instance_info_cache with network_info: [{"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:53:17 compute-0 nova_compute[185474]: 2026-01-05 14:53:17.909 185478 DEBUG oslo_concurrency.lockutils [req-6bab6003-ee32-4f0a-9318-3c4c4abfed62 req-e7e9b21c-2b9c-4208-9e4f-33ea4bb102a7 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:53:18 compute-0 nova_compute[185474]: 2026-01-05 14:53:18.724 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:18 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 14:53:18 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 14:53:19 compute-0 nova_compute[185474]: 2026-01-05 14:53:19.738 185478 DEBUG nova.compute.manager [req-5d6eddd5-386b-4e0a-b3e2-4ce45db21940 req-7e379d07-56a8-41d9-8cdc-4dc174d8dfb5 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Received event network-vif-plugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:53:19 compute-0 nova_compute[185474]: 2026-01-05 14:53:19.739 185478 DEBUG oslo_concurrency.lockutils [req-5d6eddd5-386b-4e0a-b3e2-4ce45db21940 req-7e379d07-56a8-41d9-8cdc-4dc174d8dfb5 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "bf9485c0-8711-436a-aad0-658ecba71329-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:19 compute-0 nova_compute[185474]: 2026-01-05 14:53:19.739 185478 DEBUG oslo_concurrency.lockutils [req-5d6eddd5-386b-4e0a-b3e2-4ce45db21940 req-7e379d07-56a8-41d9-8cdc-4dc174d8dfb5 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:19 compute-0 nova_compute[185474]: 2026-01-05 14:53:19.739 185478 DEBUG oslo_concurrency.lockutils [req-5d6eddd5-386b-4e0a-b3e2-4ce45db21940 req-7e379d07-56a8-41d9-8cdc-4dc174d8dfb5 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:19 compute-0 nova_compute[185474]: 2026-01-05 14:53:19.740 185478 DEBUG nova.compute.manager [req-5d6eddd5-386b-4e0a-b3e2-4ce45db21940 req-7e379d07-56a8-41d9-8cdc-4dc174d8dfb5 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] No waiting events found dispatching network-vif-plugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 14:53:19 compute-0 nova_compute[185474]: 2026-01-05 14:53:19.740 185478 WARNING nova.compute.manager [req-5d6eddd5-386b-4e0a-b3e2-4ce45db21940 req-7e379d07-56a8-41d9-8cdc-4dc174d8dfb5 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Received unexpected event network-vif-plugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f for instance with vm_state active and task_state None.
Jan 05 14:53:20 compute-0 nova_compute[185474]: 2026-01-05 14:53:20.501 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:21 compute-0 podman[242982]: 2026-01-05 14:53:21.589442637 +0000 UTC m=+0.081825702 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 14:53:21 compute-0 podman[242983]: 2026-01-05 14:53:21.617402893 +0000 UTC m=+0.097161551 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:53:23 compute-0 nova_compute[185474]: 2026-01-05 14:53:23.727 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:25 compute-0 nova_compute[185474]: 2026-01-05 14:53:25.505 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:25 compute-0 podman[243022]: 2026-01-05 14:53:25.618743083 +0000 UTC m=+0.099918708 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 14:53:28 compute-0 nova_compute[185474]: 2026-01-05 14:53:28.730 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:29 compute-0 podman[201880]: time="2026-01-05T14:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:53:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:53:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 05 14:53:30 compute-0 nova_compute[185474]: 2026-01-05 14:53:30.509 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:31 compute-0 openstack_network_exporter[205179]: ERROR   14:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:53:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:53:31 compute-0 openstack_network_exporter[205179]: ERROR   14:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:53:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:53:33 compute-0 podman[243041]: 2026-01-05 14:53:33.631276681 +0000 UTC m=+0.114549719 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 14:53:33 compute-0 nova_compute[185474]: 2026-01-05 14:53:33.732 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:35 compute-0 nova_compute[185474]: 2026-01-05 14:53:35.512 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:38 compute-0 nova_compute[185474]: 2026-01-05 14:53:38.734 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:40 compute-0 nova_compute[185474]: 2026-01-05 14:53:40.517 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:41 compute-0 nova_compute[185474]: 2026-01-05 14:53:41.207 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:53:41 compute-0 nova_compute[185474]: 2026-01-05 14:53:41.208 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:53:41 compute-0 nova_compute[185474]: 2026-01-05 14:53:41.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:53:41 compute-0 nova_compute[185474]: 2026-01-05 14:53:41.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:53:41 compute-0 podman[243061]: 2026-01-05 14:53:41.655960701 +0000 UTC m=+0.134858495 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, config_id=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41)
Jan 05 14:53:42 compute-0 nova_compute[185474]: 2026-01-05 14:53:42.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:53:42 compute-0 nova_compute[185474]: 2026-01-05 14:53:42.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.443 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.451 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.452 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.452 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.620 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:43 compute-0 podman[243081]: 2026-01-05 14:53:43.660831561 +0000 UTC m=+0.153250309 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.725 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.728 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.752 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.815 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.816 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.904 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:43 compute-0 nova_compute[185474]: 2026-01-05 14:53:43.905 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.013 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.032 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.128 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.129 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.224 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.226 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.324 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.326 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.393 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.409 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.477 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.479 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.553 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.556 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.645 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.648 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.730 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.746 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:44.808 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:44.809 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:53:44.809 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.826 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.829 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.920 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:44 compute-0 nova_compute[185474]: 2026-01-05 14:53:44.924 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.030 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.032 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.097 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.523 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.561 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.562 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4719MB free_disk=72.3776741027832GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.563 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.563 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.695 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.696 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bdb0ea32-677c-48d8-ae08-c15ba402d14f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.696 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance f927dce2-97db-41ff-a7bc-a34d4e7486d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.696 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bf9485c0-8711-436a-aad0-658ecba71329 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.697 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.697 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.800 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.813 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.837 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:53:45 compute-0 nova_compute[185474]: 2026-01-05 14:53:45.837 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:53:46 compute-0 podman[243165]: 2026-01-05 14:53:46.575560656 +0000 UTC m=+0.071434428 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:53:46 compute-0 podman[243166]: 2026-01-05 14:53:46.597792404 +0000 UTC m=+0.080831174 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 14:53:46 compute-0 ovn_controller[97763]: 2026-01-05T14:53:46Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:7d:54 192.168.0.72
Jan 05 14:53:46 compute-0 nova_compute[185474]: 2026-01-05 14:53:46.832 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:53:46 compute-0 ovn_controller[97763]: 2026-01-05T14:53:46Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:7d:54 192.168.0.72
Jan 05 14:53:46 compute-0 nova_compute[185474]: 2026-01-05 14:53:46.876 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:53:46 compute-0 nova_compute[185474]: 2026-01-05 14:53:46.876 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:53:47 compute-0 ovn_controller[97763]: 2026-01-05T14:53:47Z|00049|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 05 14:53:47 compute-0 nova_compute[185474]: 2026-01-05 14:53:47.389 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:53:47 compute-0 nova_compute[185474]: 2026-01-05 14:53:47.390 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:53:47 compute-0 nova_compute[185474]: 2026-01-05 14:53:47.390 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:53:48 compute-0 nova_compute[185474]: 2026-01-05 14:53:48.743 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:48 compute-0 nova_compute[185474]: 2026-01-05 14:53:48.826 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Updating instance_info_cache with network_info: [{"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:53:48 compute-0 nova_compute[185474]: 2026-01-05 14:53:48.870 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:53:48 compute-0 nova_compute[185474]: 2026-01-05 14:53:48.870 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:53:48 compute-0 nova_compute[185474]: 2026-01-05 14:53:48.871 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:53:48 compute-0 nova_compute[185474]: 2026-01-05 14:53:48.872 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:53:50 compute-0 nova_compute[185474]: 2026-01-05 14:53:50.526 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:52 compute-0 podman[243213]: 2026-01-05 14:53:52.687175982 +0000 UTC m=+0.161749052 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:53:52 compute-0 podman[243215]: 2026-01-05 14:53:52.692659372 +0000 UTC m=+0.155555551 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 14:53:53 compute-0 nova_compute[185474]: 2026-01-05 14:53:53.746 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:55 compute-0 nova_compute[185474]: 2026-01-05 14:53:55.531 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:56 compute-0 podman[243256]: 2026-01-05 14:53:56.721639186 +0000 UTC m=+0.190205201 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, distribution-scope=public, build-date=2024-09-18T21:23:30, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.29.0)
Jan 05 14:53:58 compute-0 nova_compute[185474]: 2026-01-05 14:53:58.750 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:53:59 compute-0 podman[201880]: time="2026-01-05T14:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:53:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:53:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 05 14:54:00 compute-0 nova_compute[185474]: 2026-01-05 14:54:00.535 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:01 compute-0 openstack_network_exporter[205179]: ERROR   14:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:54:01 compute-0 openstack_network_exporter[205179]: ERROR   14:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:54:03 compute-0 nova_compute[185474]: 2026-01-05 14:54:03.754 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:04 compute-0 podman[243275]: 2026-01-05 14:54:04.652781003 +0000 UTC m=+0.120873942 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 14:54:05 compute-0 nova_compute[185474]: 2026-01-05 14:54:05.540 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:08 compute-0 nova_compute[185474]: 2026-01-05 14:54:08.757 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:10 compute-0 nova_compute[185474]: 2026-01-05 14:54:10.544 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:12 compute-0 podman[243296]: 2026-01-05 14:54:12.662241758 +0000 UTC m=+0.144573581 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 05 14:54:13 compute-0 nova_compute[185474]: 2026-01-05 14:54:13.760 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:14 compute-0 podman[243317]: 2026-01-05 14:54:14.74443707 +0000 UTC m=+0.214877097 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 14:54:15 compute-0 nova_compute[185474]: 2026-01-05 14:54:15.547 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:17 compute-0 podman[243344]: 2026-01-05 14:54:17.599798871 +0000 UTC m=+0.080887037 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:54:17 compute-0 podman[243343]: 2026-01-05 14:54:17.614915686 +0000 UTC m=+0.098170601 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:54:18 compute-0 nova_compute[185474]: 2026-01-05 14:54:18.764 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:20 compute-0 nova_compute[185474]: 2026-01-05 14:54:20.551 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:23 compute-0 podman[243386]: 2026-01-05 14:54:23.61296861 +0000 UTC m=+0.085291548 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 14:54:23 compute-0 podman[243385]: 2026-01-05 14:54:23.617119823 +0000 UTC m=+0.103312531 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 14:54:23 compute-0 nova_compute[185474]: 2026-01-05 14:54:23.767 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:25 compute-0 nova_compute[185474]: 2026-01-05 14:54:25.559 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:27 compute-0 podman[243429]: 2026-01-05 14:54:27.59229857 +0000 UTC m=+0.086840179 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 14:54:28 compute-0 nova_compute[185474]: 2026-01-05 14:54:28.771 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:29 compute-0 podman[201880]: time="2026-01-05T14:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:54:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:54:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 05 14:54:30 compute-0 nova_compute[185474]: 2026-01-05 14:54:30.564 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:31 compute-0 openstack_network_exporter[205179]: ERROR   14:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:54:31 compute-0 openstack_network_exporter[205179]: ERROR   14:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:54:33 compute-0 nova_compute[185474]: 2026-01-05 14:54:33.774 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:35 compute-0 nova_compute[185474]: 2026-01-05 14:54:35.568 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:35 compute-0 podman[243449]: 2026-01-05 14:54:35.673395583 +0000 UTC m=+0.135600906 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224)
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.752 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.752 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.753 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4fb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.759 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance bf9485c0-8711-436a-aad0-658ecba71329 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 14:54:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:37.760 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/bf9485c0-8711-436a-aad0-658ecba71329 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}3a9a6b0d955f091f392374a695f163a2995629ca5c315b3823e8a6b9c12e4c9b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.576 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Mon, 05 Jan 2026 14:54:37 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a26739ef-03ee-4102-9fd6-300564ef83b1 x-openstack-request-id: req-a26739ef-03ee-4102-9fd6-300564ef83b1 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.576 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "bf9485c0-8711-436a-aad0-658ecba71329", "name": "vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj", "status": "ACTIVE", "tenant_id": "54417029b2fb4b749e20754214013802", "user_id": "4c0cf318026a40748762c9e05cd1efe0", "metadata": {"metering.server_group": "fb98dcdd-a12e-44ca-97ca-fe43134a3faa"}, "hostId": "35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc", "image": {"id": "22e54d95-dd91-4f66-a65f-ce9984e648dc", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/22e54d95-dd91-4f66-a65f-ce9984e648dc"}]}, "flavor": {"id": "afe04c80-f0ab-417e-844c-b5b05cc96b17", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/afe04c80-f0ab-417e-844c-b5b05cc96b17"}]}, "created": "2026-01-05T14:53:07Z", "updated": "2026-01-05T14:53:17Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.72", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ef:7d:54"}, {"version": 4, "addr": "192.168.122.227", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ef:7d:54"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/bf9485c0-8711-436a-aad0-658ecba71329"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/bf9485c0-8711-436a-aad0-658ecba71329"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-05T14:53:17.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.576 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/bf9485c0-8711-436a-aad0-658ecba71329 used request id req-a26739ef-03ee-4102-9fd6-300564ef83b1 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.578 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf9485c0-8711-436a-aad0-658ecba71329', 'name': 'vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.585 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'name': 'test_0', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.591 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bdb0ea32-677c-48d8-ae08-c15ba402d14f', 'name': 'vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.596 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f927dce2-97db-41ff-a7bc-a34d4e7486d4', 'name': 'vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.597 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.598 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T14:54:38.598119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.740 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 1385624795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.741 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 14233900 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.741 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:38 compute-0 nova_compute[185474]: 2026-01-05 14:54:38.777 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.840 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 1728689582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.841 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 18915144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.842 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.957 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 1228730185 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.958 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 12433569 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:38.958 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.083 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 1801199740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.084 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 10969023 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.085 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.150 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.150 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.150 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 464426220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.151 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 74874753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.152 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T14:54:39.150724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.155 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 83046078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.155 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 396012509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.155 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 113701999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.156 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 62657112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.156 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 601656532 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.156 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 105953551 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.156 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 68177111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.156 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 545412987 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.157 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 103754380 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.157 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 84932339 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.158 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.158 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.158 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.158 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.159 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.159 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.159 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.159 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.159 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.159 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.160 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.160 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.160 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T14:54:39.158326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T14:54:39.161643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.200 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.201 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.201 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.239 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.239 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.240 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.322 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.322 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.323 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.325 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T14:54:39.325434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.332 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for bf9485c0-8711-436a-aad0-658ecba71329 / tapadeb7ded-97 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.332 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.339 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.345 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.352 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.354 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.355 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.355 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T14:54:39.355066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.356 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.356 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.357 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 41832448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.357 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.357 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.358 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.358 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.359 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.359 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.360 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.360 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.362 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
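Every cycle also logs "Checking if we need coordination ... coordination group name [None]": none of these pollsters belong to a polling source configured for workload partitioning, so no hash ring is joined and this agent simply polls all of its local instances. Roughly, a partitioned source would hash each resource onto one ring member and skip the rest; the sketch below is a generic illustration of that idea, not the tooz/ceilometer API.

    import hashlib

    # Generic hash-ring check: with no ring members (the [None] case above), poll everything.
    def should_poll(resource_id, members, me):
        if not members:
            return True
        h = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return members[h % len(members)] == me

    print(should_poll("bf9485c0-8711-436a-aad0-658ecba71329", [], "compute-0"))  # True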
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.364 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.365 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T14:54:39.362724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.365 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.366 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.366 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.367 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.367 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.368 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.368 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T14:54:39.365696) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.369 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.370 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.372 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.372 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.373 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
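Each instance reports three disk.device.capacity samples, one per attached block device. The repeated 1073741824 is exactly 1 GiB, and the much smaller third value (583680 or 485376 bytes, about 570 and 474 KiB) presumably belongs to a small config-drive-style device; that attribution is an assumption, since the log only gives the sizes.

    # Unit check for the capacity values above.
    print(1073741824 == 1024 ** 3)        # True: the larger devices are exactly 1 GiB
    print(583680 / 1024, 485376 / 1024)   # 570.0 474.0 KiB for the small third device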
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.377 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T14:54:39.376881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.378 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.379 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.379 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.380 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.380 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.381 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.382 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.383 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.383 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.384 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.384 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.386 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.387 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T14:54:39.387833) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.432 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/cpu volume: 29350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.475 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/cpu volume: 41810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.516 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/cpu volume: 328800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.549 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/cpu volume: 31170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.551 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
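The cpu meter is cumulative guest CPU time in nanoseconds, so the volumes above convert directly to seconds of CPU consumed: roughly 29.4 s, 41.8 s, 328.8 s and 31.2 s, with bdb0ea32-677c-48d8-ae08-c15ba402d14f clearly the busiest instance.

    # cpu samples are cumulative CPU time in nanoseconds.
    for instance, ns in {
        "bf9485c0": 29350000000,
        "731f6e65": 41810000000,
        "bdb0ea32": 328800000000,
        "f927dce2": 31170000000,
    }.items():
        print(f"{instance}: {ns / 1e9:.2f} s of CPU time")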
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.552 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.554 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T14:54:39.554123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.554 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.556 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.557 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.558 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.559 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.560 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.562 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.562 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T14:54:39.562322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.563 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.563 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.564 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.566 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.566 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.567 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T14:54:39.566463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.567 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.567 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj>]
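This ERROR is expected housekeeping rather than a failure: the DEBUG line just above shows the libvirt inspector provides no data for OutgoingBytesRatePollster, so the pollster raises PollsterPermanentError and the manager blacklists the affected resources for network.outgoing.bytes.rate instead of retrying them every cycle (byte rates are normally derived downstream from the cumulative network.*.bytes meters). Below is a hedged sketch of that blacklisting pattern; the exception name mirrors the log, but the function is illustrative, not ceilometer's implementation.

    # Illustrative pattern: a pollster that can never serve a resource raises a
    # permanent error, and the caller stops offering it that resource.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.fail_res_list = resources

    def run_pollster(pollster, resources, blacklist):
        candidates = [r for r in resources if r not in blacklist]
        try:
            return pollster(candidates)
        except PollsterPermanentError as err:
            blacklist.extend(err.fail_res_list)   # "Prevent pollster ... anymore!"
            return []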
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.568 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.569 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T14:54:39.569490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.573 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T14:54:39.572619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.573 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj>]
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.575 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.576 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.577 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T14:54:39.575696) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.578 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.579 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T14:54:39.580491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.580 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.581 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.581 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.582 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.582 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T14:54:39.584293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.584 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.585 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.585 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.586 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.588 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.588 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.588 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.589 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.589 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T14:54:39.587988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.591 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.592 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.592 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.bytes volume: 2188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.593 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T14:54:39.592046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.593 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.bytes volume: 7502 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.594 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.595 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T14:54:39.596124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.596 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.596 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.597 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.597 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.598 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.599 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.599 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.600 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.600 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.601 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.601 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T14:54:39.599823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.602 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.603 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.603 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T14:54:39.603649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.604 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/memory.usage volume: 48.7890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.604 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/memory.usage volume: 48.9765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.605 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/memory.usage volume: 49.0078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.606 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.606 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.607 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.607 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T14:54:39.607172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.607 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.608 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.608 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.609 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.609 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.609 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.610 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.610 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.611 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.611 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.612 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.612 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.614 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.614 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.614 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.615 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.616 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.616 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.617 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T14:54:39.614663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.619 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.619 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.619 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T14:54:39.618987) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.620 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.620 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.621 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.621 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.622 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.622 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.622 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.623 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.624 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.624 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.625 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.627 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.631 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:54:39.632 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:54:40 compute-0 nova_compute[185474]: 2026-01-05 14:54:40.573 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:41 compute-0 nova_compute[185474]: 2026-01-05 14:54:41.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:54:41 compute-0 nova_compute[185474]: 2026-01-05 14:54:41.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:54:41 compute-0 nova_compute[185474]: 2026-01-05 14:54:41.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:54:41 compute-0 nova_compute[185474]: 2026-01-05 14:54:41.401 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:54:42 compute-0 nova_compute[185474]: 2026-01-05 14:54:42.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:54:43 compute-0 nova_compute[185474]: 2026-01-05 14:54:43.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:54:43 compute-0 podman[243469]: 2026-01-05 14:54:43.63444205 +0000 UTC m=+0.114839606 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, config_id=openstack_network_exporter, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc.)
Jan 05 14:54:43 compute-0 nova_compute[185474]: 2026-01-05 14:54:43.780 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:44 compute-0 nova_compute[185474]: 2026-01-05 14:54:44.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:54:44 compute-0 nova_compute[185474]: 2026-01-05 14:54:44.425 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:54:44 compute-0 nova_compute[185474]: 2026-01-05 14:54:44.426 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:54:44 compute-0 nova_compute[185474]: 2026-01-05 14:54:44.427 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:54:44 compute-0 nova_compute[185474]: 2026-01-05 14:54:44.428 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:54:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:54:44.810 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:54:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:54:44.810 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:54:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:54:44.811 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:54:44 compute-0 nova_compute[185474]: 2026-01-05 14:54:44.938 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.015 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.019 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.093 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.095 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.164 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.165 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.247 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.255 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.344 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.346 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.409 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.411 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.475 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.476 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.577 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.584 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.593 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 podman[243510]: 2026-01-05 14:54:45.654995555 +0000 UTC m=+0.138201506 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.672 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.675 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.769 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.770 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.878 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.880 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.983 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:45 compute-0 nova_compute[185474]: 2026-01-05 14:54:45.994 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.084 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.086 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.149 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.150 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.251 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.252 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.344 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.888 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.891 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4607MB free_disk=72.35612106323242GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.891 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:54:46 compute-0 nova_compute[185474]: 2026-01-05 14:54:46.892 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:54:47 compute-0 nova_compute[185474]: 2026-01-05 14:54:47.017 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:54:47 compute-0 nova_compute[185474]: 2026-01-05 14:54:47.018 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bdb0ea32-677c-48d8-ae08-c15ba402d14f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:54:47 compute-0 nova_compute[185474]: 2026-01-05 14:54:47.018 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance f927dce2-97db-41ff-a7bc-a34d4e7486d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:54:47 compute-0 nova_compute[185474]: 2026-01-05 14:54:47.018 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bf9485c0-8711-436a-aad0-658ecba71329 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:54:47 compute-0 nova_compute[185474]: 2026-01-05 14:54:47.018 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:54:47 compute-0 nova_compute[185474]: 2026-01-05 14:54:47.018 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:54:47 compute-0 nova_compute[185474]: 2026-01-05 14:54:47.183 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:54:47 compute-0 nova_compute[185474]: 2026-01-05 14:54:47.202 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:54:47 compute-0 nova_compute[185474]: 2026-01-05 14:54:47.204 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:54:47 compute-0 nova_compute[185474]: 2026-01-05 14:54:47.204 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:54:48 compute-0 podman[243561]: 2026-01-05 14:54:48.636066408 +0000 UTC m=+0.110301832 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:54:48 compute-0 podman[243562]: 2026-01-05 14:54:48.670552672 +0000 UTC m=+0.141674851 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 14:54:48 compute-0 nova_compute[185474]: 2026-01-05 14:54:48.787 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:49 compute-0 nova_compute[185474]: 2026-01-05 14:54:49.204 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:54:49 compute-0 nova_compute[185474]: 2026-01-05 14:54:49.205 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:54:49 compute-0 nova_compute[185474]: 2026-01-05 14:54:49.205 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:54:49 compute-0 nova_compute[185474]: 2026-01-05 14:54:49.480 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:54:49 compute-0 nova_compute[185474]: 2026-01-05 14:54:49.481 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:54:49 compute-0 nova_compute[185474]: 2026-01-05 14:54:49.481 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:54:49 compute-0 nova_compute[185474]: 2026-01-05 14:54:49.481 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:54:50 compute-0 nova_compute[185474]: 2026-01-05 14:54:50.586 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:52 compute-0 nova_compute[185474]: 2026-01-05 14:54:52.433 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:54:52 compute-0 nova_compute[185474]: 2026-01-05 14:54:52.448 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:54:52 compute-0 nova_compute[185474]: 2026-01-05 14:54:52.448 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:54:52 compute-0 nova_compute[185474]: 2026-01-05 14:54:52.449 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:54:52 compute-0 nova_compute[185474]: 2026-01-05 14:54:52.449 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:54:53 compute-0 nova_compute[185474]: 2026-01-05 14:54:53.790 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:54 compute-0 podman[243601]: 2026-01-05 14:54:54.654453915 +0000 UTC m=+0.132074358 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 05 14:54:54 compute-0 podman[243602]: 2026-01-05 14:54:54.671514292 +0000 UTC m=+0.142954826 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 14:54:55 compute-0 nova_compute[185474]: 2026-01-05 14:54:55.589 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:58 compute-0 podman[243641]: 2026-01-05 14:54:58.698623076 +0000 UTC m=+0.168484024 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release=1214.1726694543, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release-0.7.12=, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler)
Jan 05 14:54:58 compute-0 nova_compute[185474]: 2026-01-05 14:54:58.794 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:54:59 compute-0 podman[201880]: time="2026-01-05T14:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:54:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:54:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4379 "" "Go-http-client/1.1"
Jan 05 14:55:00 compute-0 nova_compute[185474]: 2026-01-05 14:55:00.593 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:01 compute-0 openstack_network_exporter[205179]: ERROR   14:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:55:01 compute-0 openstack_network_exporter[205179]: ERROR   14:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:55:03 compute-0 nova_compute[185474]: 2026-01-05 14:55:03.798 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:05 compute-0 nova_compute[185474]: 2026-01-05 14:55:05.598 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:06 compute-0 podman[243660]: 2026-01-05 14:55:06.616243883 +0000 UTC m=+0.103272468 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4)
Jan 05 14:55:08 compute-0 nova_compute[185474]: 2026-01-05 14:55:08.801 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:10 compute-0 nova_compute[185474]: 2026-01-05 14:55:10.602 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:13 compute-0 nova_compute[185474]: 2026-01-05 14:55:13.804 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:14 compute-0 podman[243680]: 2026-01-05 14:55:14.634316493 +0000 UTC m=+0.103646448 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 05 14:55:15 compute-0 nova_compute[185474]: 2026-01-05 14:55:15.606 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:16 compute-0 podman[243700]: 2026-01-05 14:55:16.681344722 +0000 UTC m=+0.153789106 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 05 14:55:18 compute-0 nova_compute[185474]: 2026-01-05 14:55:18.806 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:19 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 14:55:19 compute-0 podman[243728]: 2026-01-05 14:55:19.262244553 +0000 UTC m=+0.096183417 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:55:19 compute-0 podman[243729]: 2026-01-05 14:55:19.28063197 +0000 UTC m=+0.109793764 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 14:55:20 compute-0 nova_compute[185474]: 2026-01-05 14:55:20.610 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:23 compute-0 nova_compute[185474]: 2026-01-05 14:55:23.809 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:25 compute-0 nova_compute[185474]: 2026-01-05 14:55:25.615 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:25 compute-0 podman[243775]: 2026-01-05 14:55:25.632934402 +0000 UTC m=+0.102222630 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 14:55:25 compute-0 podman[243774]: 2026-01-05 14:55:25.657941129 +0000 UTC m=+0.128880271 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 05 14:55:28 compute-0 nova_compute[185474]: 2026-01-05 14:55:28.813 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:29 compute-0 podman[243818]: 2026-01-05 14:55:29.643157432 +0000 UTC m=+0.118184172 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, architecture=x86_64, name=ubi9, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 05 14:55:29 compute-0 podman[201880]: time="2026-01-05T14:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:55:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:55:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4380 "" "Go-http-client/1.1"
Jan 05 14:55:30 compute-0 nova_compute[185474]: 2026-01-05 14:55:30.620 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:31 compute-0 openstack_network_exporter[205179]: ERROR   14:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:55:31 compute-0 openstack_network_exporter[205179]: ERROR   14:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:55:33 compute-0 nova_compute[185474]: 2026-01-05 14:55:33.815 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:35 compute-0 nova_compute[185474]: 2026-01-05 14:55:35.625 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:37 compute-0 podman[243836]: 2026-01-05 14:55:37.597544598 +0000 UTC m=+0.084960742 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251224)
Jan 05 14:55:38 compute-0 nova_compute[185474]: 2026-01-05 14:55:38.818 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:40 compute-0 nova_compute[185474]: 2026-01-05 14:55:40.630 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:41 compute-0 nova_compute[185474]: 2026-01-05 14:55:41.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:42 compute-0 nova_compute[185474]: 2026-01-05 14:55:42.395 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:42 compute-0 nova_compute[185474]: 2026-01-05 14:55:42.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:42 compute-0 nova_compute[185474]: 2026-01-05 14:55:42.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:55:43 compute-0 nova_compute[185474]: 2026-01-05 14:55:43.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:43 compute-0 nova_compute[185474]: 2026-01-05 14:55:43.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:43 compute-0 nova_compute[185474]: 2026-01-05 14:55:43.820 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:44 compute-0 nova_compute[185474]: 2026-01-05 14:55:44.630 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:44 compute-0 podman[243857]: 2026-01-05 14:55:44.798488565 +0000 UTC m=+0.092972479 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 14:55:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:55:44.811 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:55:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:55:44.812 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:55:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:55:44.813 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:55:44 compute-0 nova_compute[185474]: 2026-01-05 14:55:44.969 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.077 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.078 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.079 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.079 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.206 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.277 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.279 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.341 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.343 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.403 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.405 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.468 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.477 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.580 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.581 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.634 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.679 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.680 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.775 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.776 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.886 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.906 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.975 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:45 compute-0 nova_compute[185474]: 2026-01-05 14:55:45.978 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.048 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.050 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.116 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.118 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.211 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.235 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.307 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.309 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.369 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.370 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.471 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.472 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.568 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.990 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.993 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4611MB free_disk=72.35612106323242GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.994 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:55:46 compute-0 nova_compute[185474]: 2026-01-05 14:55:46.995 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.191 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.191 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bdb0ea32-677c-48d8-ae08-c15ba402d14f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.192 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance f927dce2-97db-41ff-a7bc-a34d4e7486d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.192 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bf9485c0-8711-436a-aad0-658ecba71329 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.192 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.193 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.270 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing inventories for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.348 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating ProviderTree inventory for provider 81b80649-e249-4f86-9377-abfcf7fc43dd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.349 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.368 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing aggregate associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.402 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing trait associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, traits: HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.535 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.553 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.554 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.555 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:55:47 compute-0 podman[243926]: 2026-01-05 14:55:47.620823974 +0000 UTC m=+0.117999797 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.985 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:47 compute-0 nova_compute[185474]: 2026-01-05 14:55:47.985 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:55:48 compute-0 nova_compute[185474]: 2026-01-05 14:55:48.553 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:55:48 compute-0 nova_compute[185474]: 2026-01-05 14:55:48.554 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:55:48 compute-0 nova_compute[185474]: 2026-01-05 14:55:48.554 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:55:48 compute-0 nova_compute[185474]: 2026-01-05 14:55:48.824 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:49 compute-0 podman[243951]: 2026-01-05 14:55:49.645262442 +0000 UTC m=+0.112661052 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 14:55:49 compute-0 podman[243952]: 2026-01-05 14:55:49.69871827 +0000 UTC m=+0.156825208 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:55:50 compute-0 nova_compute[185474]: 2026-01-05 14:55:50.183 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updating instance_info_cache with network_info: [{"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:55:50 compute-0 nova_compute[185474]: 2026-01-05 14:55:50.206 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:55:50 compute-0 nova_compute[185474]: 2026-01-05 14:55:50.206 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:55:50 compute-0 nova_compute[185474]: 2026-01-05 14:55:50.207 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:50 compute-0 nova_compute[185474]: 2026-01-05 14:55:50.208 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:50 compute-0 nova_compute[185474]: 2026-01-05 14:55:50.208 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:50 compute-0 nova_compute[185474]: 2026-01-05 14:55:50.638 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:53 compute-0 nova_compute[185474]: 2026-01-05 14:55:53.826 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:54 compute-0 nova_compute[185474]: 2026-01-05 14:55:54.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:54 compute-0 nova_compute[185474]: 2026-01-05 14:55:54.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 14:55:55 compute-0 nova_compute[185474]: 2026-01-05 14:55:55.641 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:56 compute-0 podman[243993]: 2026-01-05 14:55:56.635950675 +0000 UTC m=+0.111222094 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:55:56 compute-0 podman[243994]: 2026-01-05 14:55:56.639364737 +0000 UTC m=+0.115423627 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:55:57 compute-0 nova_compute[185474]: 2026-01-05 14:55:57.412 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:55:57 compute-0 nova_compute[185474]: 2026-01-05 14:55:57.413 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 14:55:57 compute-0 nova_compute[185474]: 2026-01-05 14:55:57.696 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 14:55:58 compute-0 nova_compute[185474]: 2026-01-05 14:55:58.830 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:55:59 compute-0 podman[201880]: time="2026-01-05T14:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:55:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:55:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
Jan 05 14:56:00 compute-0 podman[244034]: 2026-01-05 14:56:00.595088242 +0000 UTC m=+0.090365018 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, release-0.7.12=, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container)
Jan 05 14:56:00 compute-0 nova_compute[185474]: 2026-01-05 14:56:00.645 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:01 compute-0 openstack_network_exporter[205179]: ERROR   14:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:56:01 compute-0 openstack_network_exporter[205179]: ERROR   14:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:56:03 compute-0 nova_compute[185474]: 2026-01-05 14:56:03.833 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:05 compute-0 nova_compute[185474]: 2026-01-05 14:56:05.648 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:08 compute-0 sshd-session[244052]: Invalid user solana from 165.22.168.95 port 43172
Jan 05 14:56:08 compute-0 podman[244054]: 2026-01-05 14:56:08.626435048 +0000 UTC m=+0.091932321 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Jan 05 14:56:08 compute-0 sshd-session[244052]: Connection closed by invalid user solana 165.22.168.95 port 43172 [preauth]
Jan 05 14:56:08 compute-0 nova_compute[185474]: 2026-01-05 14:56:08.836 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:10 compute-0 nova_compute[185474]: 2026-01-05 14:56:10.657 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:13 compute-0 nova_compute[185474]: 2026-01-05 14:56:13.840 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.206 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.239 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Triggering sync for uuid 731f6e65-e951-4af3-aaf3-0322c02b154c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.240 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Triggering sync for uuid bdb0ea32-677c-48d8-ae08-c15ba402d14f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.241 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Triggering sync for uuid f927dce2-97db-41ff-a7bc-a34d4e7486d4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.241 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Triggering sync for uuid bf9485c0-8711-436a-aad0-658ecba71329 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.243 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "731f6e65-e951-4af3-aaf3-0322c02b154c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.243 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.244 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.245 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.245 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.246 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.247 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "bf9485c0-8711-436a-aad0-658ecba71329" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.248 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "bf9485c0-8711-436a-aad0-658ecba71329" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.405 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.439 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.458 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:56:14 compute-0 nova_compute[185474]: 2026-01-05 14:56:14.464 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "bf9485c0-8711-436a-aad0-658ecba71329" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:56:15 compute-0 nova_compute[185474]: 2026-01-05 14:56:15.662 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:15 compute-0 podman[244073]: 2026-01-05 14:56:15.690982594 +0000 UTC m=+0.153106718 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git)
Jan 05 14:56:18 compute-0 podman[244093]: 2026-01-05 14:56:18.656305941 +0000 UTC m=+0.134854993 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
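The podman health_status lines are periodic healthcheck events: each container's 'healthcheck' entry in config_data mounts a script under /openstack/healthcheck, and podman records the outcome (health_status, health_failing_streak) every time the test runs. A small sketch for checking one of these containers by hand, assuming the podman CLI is available on the host:

    import subprocess

    def container_is_healthy(name: str) -> bool:
        # "podman healthcheck run" executes the container's configured test
        # (here, the /openstack/healthcheck scripts mounted by edpm_ansible)
        # and exits with code 0 when the container reports healthy.
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        return result.returncode == 0

    # e.g. container_is_healthy("ovn_controller")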
Jan 05 14:56:18 compute-0 nova_compute[185474]: 2026-01-05 14:56:18.843 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:20 compute-0 podman[244119]: 2026-01-05 14:56:20.608000691 +0000 UTC m=+0.093871594 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:56:20 compute-0 podman[244120]: 2026-01-05 14:56:20.610800667 +0000 UTC m=+0.081637872 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 05 14:56:20 compute-0 nova_compute[185474]: 2026-01-05 14:56:20.666 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:23 compute-0 nova_compute[185474]: 2026-01-05 14:56:23.845 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:25 compute-0 nova_compute[185474]: 2026-01-05 14:56:25.669 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:27 compute-0 podman[244157]: 2026-01-05 14:56:27.621386563 +0000 UTC m=+0.093449252 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 14:56:27 compute-0 podman[244158]: 2026-01-05 14:56:27.639396561 +0000 UTC m=+0.106931877 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 14:56:28 compute-0 nova_compute[185474]: 2026-01-05 14:56:28.846 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:29 compute-0 podman[201880]: time="2026-01-05T14:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:56:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:56:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4373 "" "Go-http-client/1.1"
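The two GET lines are the libpod REST API being queried over the podman socket (the podman_exporter container sets CONTAINER_HOST=unix:///run/podman/podman.sock): /libpod/containers/json lists containers and /libpod/containers/stats returns their resource usage. A minimal sketch of the same listing call over the unix socket, using only the standard library; the socket path is taken from the exporter configuration shown above:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Plain HTTP over a unix socket, enough to talk to the libpod API.
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")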
Jan 05 14:56:30 compute-0 nova_compute[185474]: 2026-01-05 14:56:30.673 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:31 compute-0 openstack_network_exporter[205179]: ERROR   14:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:56:31 compute-0 openstack_network_exporter[205179]: ERROR   14:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
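These two errors are the exporter calling ovs-appctl targets that only exist for the userspace (netdev) datapath: dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show report per-PMD-thread statistics, so on a host that appears to use the kernel OVS datapath they fail with "please specify an existing datapath" on every scrape. The equivalent manual check, assuming ovs-appctl is on the path:

    import subprocess

    def pmd_rxq_show() -> str:
        # Same appctl call the exporter makes; it returns data only when OVS
        # runs the userspace/DPDK datapath with PMD threads, and fails exactly
        # as in the log otherwise.
        proc = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
                              capture_output=True, text=True)
        return proc.stdout if proc.returncode == 0 else proc.stderr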
Jan 05 14:56:31 compute-0 podman[244197]: 2026-01-05 14:56:31.686986974 +0000 UTC m=+0.158265898 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, io.buildah.version=1.29.0, architecture=x86_64, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 05 14:56:33 compute-0 nova_compute[185474]: 2026-01-05 14:56:33.849 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:35 compute-0 nova_compute[185474]: 2026-01-05 14:56:35.675 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.753 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.753 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
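"[pollsters]" here is the name of a polling source: ceilometer groups meters into sources defined in polling.yaml, and each source carries its own interval and meter list, executed by the worker threads mentioned above. An illustrative parsed form of such a source entry; the interval and meter patterns are assumptions for the example, not this host's actual configuration:

    # Parsed equivalent of one polling.yaml source entry (illustrative values).
    POLLING_SOURCE = {
        "name": "pollsters",
        "interval": 120,  # seconds between polling cycles (assumed)
        "meters": ["disk.device.*", "network.outgoing.packets.drop"],
    }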
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.754 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb54d4a70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.761 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf9485c0-8711-436a-aad0-658ecba71329', 'name': 'vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.765 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'name': 'test_0', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.772 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bdb0ea32-677c-48d8-ae08-c15ba402d14f', 'name': 'vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.777 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f927dce2-97db-41ff-a7bc-a34d4e7486d4', 'name': 'vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
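Each "instance data" dict above is what discover_libvirt_polling hands to the pollsters: flavor, image, host placement and the user metadata, where the metering.server_group key marks instances belonging to the same server group (three of the four VMs share the same value, while test_0 has empty metadata). A small helper illustrating how that field can be read from such a payload; the function name is ours, not ceilometer's:

    def server_group_of(instance: dict):
        # Returns the metering.server_group value for tagged instances,
        # or None for untagged servers such as 'test_0' above.
        return instance.get("metadata", {}).get("metering.server_group")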
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.778 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.779 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.779 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.780 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.782 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T14:56:37.780090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.894 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 1385624795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.895 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 14233900 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:37.896 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.006 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 1728689582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.007 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 18915144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.007 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.125 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 1228730185 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.125 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 12433569 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.126 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.255 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 1801199740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.256 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 10969023 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.256 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.257 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
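The disk.device.write.latency values logged per device are cumulative counters (total time spent servicing writes since the instance started, reported by libvirt in nanoseconds), so a single sample is only meaningful relative to the previous one; downstream consumers turn consecutive polls into a rate. A minimal sketch of that calculation, where the first value is taken from the bf9485c0-8711-436a-aad0-658ecba71329 samples above and the second is invented for illustration:

    def rate_per_second(prev_value, curr_value, interval_seconds):
        # Rate of change of a cumulative counter between two polling cycles.
        return (curr_value - prev_value) / interval_seconds

    # Two samples of the same device counter, assumed 120 s apart:
    print(rate_per_second(1385624795, 1390000000, 120))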
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.257 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.258 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.258 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.258 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.258 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.258 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 464426220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.259 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 74874753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.259 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 83046078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T14:56:38.258425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.259 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 396012509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.260 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 113701999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.260 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 62657112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.260 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 601656532 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.261 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 105953551 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.261 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.latency volume: 68177111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.261 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 545412987 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.262 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 103754380 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.262 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 84932339 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.263 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.263 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.264 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.264 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.265 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.265 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.265 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.266 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.266 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.266 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.267 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.267 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.268 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.268 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.269 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.270 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T14:56:38.264562) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.270 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.273 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T14:56:38.272888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.319 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.320 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.320 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.366 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.366 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.367 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 sshd-session[244216]: Invalid user ubuntu from 45.43.63.237 port 47744
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.406 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.406 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.407 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.455 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.456 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.457 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.458 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.459 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.459 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T14:56:38.459364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.467 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.476 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.483 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.489 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.491 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.492 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.492 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.492 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.493 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.494 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T14:56:38.493015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.494 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.495 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.495 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 41832448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.496 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.496 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.497 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.497 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.498 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.499 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.499 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.500 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.502 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.503 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T14:56:38.502730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.505 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.506 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.507 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T14:56:38.506548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.508 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.508 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.509 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.511 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.511 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.512 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.513 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.514 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.515 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.516 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.517 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.519 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T14:56:38.520374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.520 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.522 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.522 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.522 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.523 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.523 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.524 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.525 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.526 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.527 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.527 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.528 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.529 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.530 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.531 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.532 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.533 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T14:56:38.532340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.575 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/cpu volume: 31120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.601 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/cpu volume: 43590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.632 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/cpu volume: 330590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 sshd-session[244216]: Received disconnect from 45.43.63.237 port 47744:11:  [preauth]
Jan 05 14:56:38 compute-0 sshd-session[244216]: Disconnected from invalid user ubuntu 45.43.63.237 port 47744 [preauth]
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.658 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/cpu volume: 32970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.659 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.659 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.659 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.660 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.660 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.660 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.661 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.662 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T14:56:38.660907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.662 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.663 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.663 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.664 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.665 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.665 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.665 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.666 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.666 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T14:56:38.666117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.666 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.667 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.668 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.669 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.670 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.671 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.671 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.672 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T14:56:38.672283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.674 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.674 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.675 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.675 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.675 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.676 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.677 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T14:56:38.676375) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.678 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.678 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.679 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.680 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.681 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.681 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.682 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.683 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.683 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.684 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.685 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.685 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.686 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.686 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.687 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T14:56:38.681316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.687 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T14:56:38.685270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.688 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.689 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.689 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.690 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.690 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.690 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.691 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T14:56:38.689223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.692 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.693 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.693 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.693 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.694 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.bytes volume: 7502 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.694 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T14:56:38.692761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.695 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.696 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.697 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.697 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.697 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.698 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.699 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.699 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.699 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.699 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.700 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.700 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.701 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.701 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.702 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.703 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.703 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.703 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/memory.usage volume: 48.7890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.704 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/memory.usage volume: 48.9765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.704 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/memory.usage volume: 49.0078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T14:56:38.696595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T14:56:38.699928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.706 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.706 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.707 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.707 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.707 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.708 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.708 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.709 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.709 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.710 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T14:56:38.703121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T14:56:38.706908) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.710 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.711 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.711 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.712 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.713 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.713 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.713 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.713 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.714 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.714 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.715 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.715 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.716 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.717 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.717 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.717 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.718 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.719 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.719 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.719 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.720 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T14:56:38.713806) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.720 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.721 14 DEBUG ceilometer.compute.pollsters [-] bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.721 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.722 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T14:56:38.717076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.723 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.725 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.726 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.727 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:56:38.728 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:56:38 compute-0 nova_compute[185474]: 2026-01-05 14:56:38.851 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:39 compute-0 podman[244219]: 2026-01-05 14:56:39.63898965 +0000 UTC m=+0.113605937 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 05 14:56:40 compute-0 nova_compute[185474]: 2026-01-05 14:56:40.678 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:42 compute-0 nova_compute[185474]: 2026-01-05 14:56:42.440 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:56:43 compute-0 nova_compute[185474]: 2026-01-05 14:56:43.855 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:44 compute-0 nova_compute[185474]: 2026-01-05 14:56:44.393 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:56:44 compute-0 nova_compute[185474]: 2026-01-05 14:56:44.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:56:44 compute-0 nova_compute[185474]: 2026-01-05 14:56:44.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:56:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:44.813 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:44.813 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:44.814 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.436 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.437 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.438 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.439 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.683 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.783 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.890 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.892 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.955 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:45 compute-0 nova_compute[185474]: 2026-01-05 14:56:45.957 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.027 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.028 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.088 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.095 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.156 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.158 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.216 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.220 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.284 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.288 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.354 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.367 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.437 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.439 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.539 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.540 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.601 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.603 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 podman[244265]: 2026-01-05 14:56:46.641234546 +0000 UTC m=+0.123915167 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=openstack_network_exporter, version=9.6, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.685 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.694 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.774 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.777 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.845 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.847 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.929 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.931 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:56:46 compute-0 nova_compute[185474]: 2026-01-05 14:56:46.997 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:56:47 compute-0 nova_compute[185474]: 2026-01-05 14:56:47.587 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:56:47 compute-0 nova_compute[185474]: 2026-01-05 14:56:47.589 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4586MB free_disk=72.3542251586914GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:56:47 compute-0 nova_compute[185474]: 2026-01-05 14:56:47.589 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:47 compute-0 nova_compute[185474]: 2026-01-05 14:56:47.590 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:47 compute-0 nova_compute[185474]: 2026-01-05 14:56:47.958 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:56:47 compute-0 nova_compute[185474]: 2026-01-05 14:56:47.958 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bdb0ea32-677c-48d8-ae08-c15ba402d14f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:56:47 compute-0 nova_compute[185474]: 2026-01-05 14:56:47.958 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance f927dce2-97db-41ff-a7bc-a34d4e7486d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:56:47 compute-0 nova_compute[185474]: 2026-01-05 14:56:47.959 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bf9485c0-8711-436a-aad0-658ecba71329 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:56:47 compute-0 nova_compute[185474]: 2026-01-05 14:56:47.959 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:56:47 compute-0 nova_compute[185474]: 2026-01-05 14:56:47.959 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:56:48 compute-0 nova_compute[185474]: 2026-01-05 14:56:48.077 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:56:48 compute-0 nova_compute[185474]: 2026-01-05 14:56:48.171 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:56:48 compute-0 nova_compute[185474]: 2026-01-05 14:56:48.175 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:56:48 compute-0 nova_compute[185474]: 2026-01-05 14:56:48.175 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:56:48 compute-0 nova_compute[185474]: 2026-01-05 14:56:48.858 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:49 compute-0 podman[244305]: 2026-01-05 14:56:49.655523574 +0000 UTC m=+0.143879246 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 05 14:56:50 compute-0 nova_compute[185474]: 2026-01-05 14:56:50.687 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:51 compute-0 nova_compute[185474]: 2026-01-05 14:56:51.177 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:56:51 compute-0 nova_compute[185474]: 2026-01-05 14:56:51.177 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:56:51 compute-0 nova_compute[185474]: 2026-01-05 14:56:51.422 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:56:51 compute-0 nova_compute[185474]: 2026-01-05 14:56:51.422 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:56:51 compute-0 nova_compute[185474]: 2026-01-05 14:56:51.422 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:56:51 compute-0 podman[244330]: 2026-01-05 14:56:51.658701611 +0000 UTC m=+0.125595153 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:56:51 compute-0 podman[244331]: 2026-01-05 14:56:51.660750307 +0000 UTC m=+0.121047580 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 05 14:56:52 compute-0 nova_compute[185474]: 2026-01-05 14:56:52.595 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Updating instance_info_cache with network_info: [{"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:56:52 compute-0 nova_compute[185474]: 2026-01-05 14:56:52.611 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:56:52 compute-0 nova_compute[185474]: 2026-01-05 14:56:52.612 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:56:52 compute-0 nova_compute[185474]: 2026-01-05 14:56:52.614 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:56:52 compute-0 nova_compute[185474]: 2026-01-05 14:56:52.615 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:56:53 compute-0 nova_compute[185474]: 2026-01-05 14:56:53.861 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:55 compute-0 nova_compute[185474]: 2026-01-05 14:56:55.691 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.291 185478 DEBUG oslo_concurrency.lockutils [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.295 185478 DEBUG oslo_concurrency.lockutils [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.296 185478 DEBUG oslo_concurrency.lockutils [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.297 185478 DEBUG oslo_concurrency.lockutils [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.298 185478 DEBUG oslo_concurrency.lockutils [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.301 185478 INFO nova.compute.manager [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Terminating instance
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.304 185478 DEBUG nova.compute.manager [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 14:56:57 compute-0 kernel: tap9e6c6e1b-0a (unregistering): left promiscuous mode
Jan 05 14:56:57 compute-0 NetworkManager[56139]: <info>  [1767625017.3676] device (tap9e6c6e1b-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 14:56:57 compute-0 ovn_controller[97763]: 2026-01-05T14:56:57Z|00050|binding|INFO|Releasing lport 9e6c6e1b-0aed-450f-a239-509674dfe31f from this chassis (sb_readonly=0)
Jan 05 14:56:57 compute-0 ovn_controller[97763]: 2026-01-05T14:56:57Z|00051|binding|INFO|Setting lport 9e6c6e1b-0aed-450f-a239-509674dfe31f down in Southbound
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.380 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 ovn_controller[97763]: 2026-01-05T14:56:57Z|00052|binding|INFO|Removing iface tap9e6c6e1b-0a ovn-installed in OVS
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.386 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.395 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:9f:84 192.168.0.224'], port_security=['fa:16:3e:4a:9f:84 192.168.0.224'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-zgjawdmpyczt-m5q5u5dyljo6-j3mxrhypctaw-port-4zgpnsyftszn', 'neutron:cidrs': '192.168.0.224/24', 'neutron:device_id': 'bdb0ea32-677c-48d8-ae08-c15ba402d14f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-905a1599-2980-4b24-9705-76e3c8a469ea', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-zgjawdmpyczt-m5q5u5dyljo6-j3mxrhypctaw-port-4zgpnsyftszn', 'neutron:project_id': '54417029b2fb4b749e20754214013802', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a131d1b-ed26-4729-8c09-f87c7299dcd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.238', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9f4be22-b417-4efb-ba81-f8a9c3c4527d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=9e6c6e1b-0aed-450f-a239-509674dfe31f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.397 107222 INFO neutron.agent.ovn.metadata.agent [-] Port 9e6c6e1b-0aed-450f-a239-509674dfe31f in datapath 905a1599-2980-4b24-9705-76e3c8a469ea unbound from our chassis
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.400 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 905a1599-2980-4b24-9705-76e3c8a469ea
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.403 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.428 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2a16cc-9858-4e9f-ba4a-9f1493fadd4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:56:57 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 05 14:56:57 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 48.530s CPU time.
Jan 05 14:56:57 compute-0 systemd-machined[156786]: Machine qemu-2-instance-00000002 terminated.
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.475 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5f1316-22da-4383-b3a5-36734c336911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.482 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1a3cf5-4a13-46d9-90f9-605c0ba701f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.524 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[c8faf1af-622a-4119-aae8-3681acc0d3b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.548 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.555 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[25180616-4836-47ea-8a26-171d99752ac2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap905a1599-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:e4:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 366227, 'reachable_time': 41299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244386, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.557 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.594 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[50d5400b-de70-4678-b47b-530c10dee272]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366246, 'tstamp': 366246}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244394, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366251, 'tstamp': 366251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244394, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.598 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905a1599-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.602 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.607 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.608 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap905a1599-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.608 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.608 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap905a1599-20, col_values=(('external_ids', {'iface-id': 'add49293-6ad0-4684-b3cd-091b92792de4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:56:57 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:57.609 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.624 185478 INFO nova.virt.libvirt.driver [-] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Instance destroyed successfully.
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.624 185478 DEBUG nova.objects.instance [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'resources' on Instance uuid bdb0ea32-677c-48d8-ae08-c15ba402d14f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.639 185478 DEBUG nova.virt.libvirt.vif [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T14:46:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-dmpyczt-m5q5u5dyljo6-j3mxrhypctaw-vnf-tefruvxceuwq',id=2,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T14:46:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='fb98dcdd-a12e-44ca-97ca-fe43134a3faa'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-17jyzkt5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T14:46:55Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcyNTIxNjc1NTE2OTA2Mzk4Mjg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzI1MjE2NzU1MTY5MDYzOTgyOD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcyNTIxNjc1NTE2OTA2Mzk4Mjg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 05 14:56:57 compute-0 nova_compute[185474]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzI1MjE2NzU1MTY5MDYzOTgyOD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcyNTIxNjc1NTE2OTA2Mzk4Mjg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MjUyMTY3NTUxNjkwNjM5ODI4PT0tLQo=',user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=bdb0ea32-677c-48d8-ae08-c15ba402d14f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.640 185478 DEBUG nova.network.os_vif_util [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.641 185478 DEBUG nova.network.os_vif_util [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:9f:84,bridge_name='br-int',has_traffic_filtering=True,id=9e6c6e1b-0aed-450f-a239-509674dfe31f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9e6c6e1b-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.641 185478 DEBUG os_vif [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:9f:84,bridge_name='br-int',has_traffic_filtering=True,id=9e6c6e1b-0aed-450f-a239-509674dfe31f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9e6c6e1b-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.643 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.643 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e6c6e1b-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.645 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.647 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.650 185478 INFO os_vif [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:9f:84,bridge_name='br-int',has_traffic_filtering=True,id=9e6c6e1b-0aed-450f-a239-509674dfe31f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9e6c6e1b-0a')
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.651 185478 INFO nova.virt.libvirt.driver [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Deleting instance files /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f_del
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.652 185478 INFO nova.virt.libvirt.driver [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Deletion of /var/lib/nova/instances/bdb0ea32-677c-48d8-ae08-c15ba402d14f_del complete
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.753 185478 DEBUG nova.virt.libvirt.host [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.753 185478 INFO nova.virt.libvirt.host [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] UEFI support detected
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.757 185478 INFO nova.compute.manager [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Took 0.45 seconds to destroy the instance on the hypervisor.
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.758 185478 DEBUG oslo.service.loopingcall [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.759 185478 DEBUG nova.compute.manager [-] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.759 185478 DEBUG nova.network.neutron [-] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.813 185478 DEBUG nova.compute.manager [req-876e380d-2dbe-4b82-a195-c01ced56df1a req-d6ef75e1-79f9-497f-86b3-1c13531d8315 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Received event network-vif-unplugged-9e6c6e1b-0aed-450f-a239-509674dfe31f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.814 185478 DEBUG oslo_concurrency.lockutils [req-876e380d-2dbe-4b82-a195-c01ced56df1a req-d6ef75e1-79f9-497f-86b3-1c13531d8315 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.814 185478 DEBUG oslo_concurrency.lockutils [req-876e380d-2dbe-4b82-a195-c01ced56df1a req-d6ef75e1-79f9-497f-86b3-1c13531d8315 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.815 185478 DEBUG oslo_concurrency.lockutils [req-876e380d-2dbe-4b82-a195-c01ced56df1a req-d6ef75e1-79f9-497f-86b3-1c13531d8315 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.815 185478 DEBUG nova.compute.manager [req-876e380d-2dbe-4b82-a195-c01ced56df1a req-d6ef75e1-79f9-497f-86b3-1c13531d8315 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] No waiting events found dispatching network-vif-unplugged-9e6c6e1b-0aed-450f-a239-509674dfe31f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 14:56:57 compute-0 nova_compute[185474]: 2026-01-05 14:56:57.816 185478 DEBUG nova.compute.manager [req-876e380d-2dbe-4b82-a195-c01ced56df1a req-d6ef75e1-79f9-497f-86b3-1c13531d8315 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Received event network-vif-unplugged-9e6c6e1b-0aed-450f-a239-509674dfe31f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 14:56:57 compute-0 rsyslogd[237079]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 14:56:57.639 185478 DEBUG nova.virt.libvirt.vif [None req-2cc2b61c-fb [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 14:56:58 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:58.047 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:56:58 compute-0 nova_compute[185474]: 2026-01-05 14:56:58.048 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:58 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:56:58.050 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 14:56:58 compute-0 podman[244409]: 2026-01-05 14:56:58.609562472 +0000 UTC m=+0.095321242 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 14:56:58 compute-0 podman[244408]: 2026-01-05 14:56:58.613115358 +0000 UTC m=+0.103985337 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 05 14:56:58 compute-0 nova_compute[185474]: 2026-01-05 14:56:58.806 185478 DEBUG nova.compute.manager [req-cc5b20ce-8d57-4bf1-a57e-e4b3839d82ee req-8e285c14-a669-4f5e-991e-1aa063fec9d9 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Received event network-changed-9e6c6e1b-0aed-450f-a239-509674dfe31f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:56:58 compute-0 nova_compute[185474]: 2026-01-05 14:56:58.807 185478 DEBUG nova.compute.manager [req-cc5b20ce-8d57-4bf1-a57e-e4b3839d82ee req-8e285c14-a669-4f5e-991e-1aa063fec9d9 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Refreshing instance network info cache due to event network-changed-9e6c6e1b-0aed-450f-a239-509674dfe31f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 14:56:58 compute-0 nova_compute[185474]: 2026-01-05 14:56:58.807 185478 DEBUG oslo_concurrency.lockutils [req-cc5b20ce-8d57-4bf1-a57e-e4b3839d82ee req-8e285c14-a669-4f5e-991e-1aa063fec9d9 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:56:58 compute-0 nova_compute[185474]: 2026-01-05 14:56:58.807 185478 DEBUG oslo_concurrency.lockutils [req-cc5b20ce-8d57-4bf1-a57e-e4b3839d82ee req-8e285c14-a669-4f5e-991e-1aa063fec9d9 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:56:58 compute-0 nova_compute[185474]: 2026-01-05 14:56:58.808 185478 DEBUG nova.network.neutron [req-cc5b20ce-8d57-4bf1-a57e-e4b3839d82ee req-8e285c14-a669-4f5e-991e-1aa063fec9d9 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Refreshing network info cache for port 9e6c6e1b-0aed-450f-a239-509674dfe31f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 14:56:58 compute-0 nova_compute[185474]: 2026-01-05 14:56:58.863 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:56:59 compute-0 podman[201880]: time="2026-01-05T14:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:56:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:56:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4377 "" "Go-http-client/1.1"
Jan 05 14:56:59 compute-0 nova_compute[185474]: 2026-01-05 14:56:59.981 185478 DEBUG nova.compute.manager [req-7ca2759e-7af6-4c63-8606-695c1d2b6e19 req-9928c37a-4229-4a2e-9917-caf6bddb3b3e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Received event network-vif-plugged-9e6c6e1b-0aed-450f-a239-509674dfe31f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:56:59 compute-0 nova_compute[185474]: 2026-01-05 14:56:59.982 185478 DEBUG oslo_concurrency.lockutils [req-7ca2759e-7af6-4c63-8606-695c1d2b6e19 req-9928c37a-4229-4a2e-9917-caf6bddb3b3e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:56:59 compute-0 nova_compute[185474]: 2026-01-05 14:56:59.982 185478 DEBUG oslo_concurrency.lockutils [req-7ca2759e-7af6-4c63-8606-695c1d2b6e19 req-9928c37a-4229-4a2e-9917-caf6bddb3b3e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:56:59 compute-0 nova_compute[185474]: 2026-01-05 14:56:59.983 185478 DEBUG oslo_concurrency.lockutils [req-7ca2759e-7af6-4c63-8606-695c1d2b6e19 req-9928c37a-4229-4a2e-9917-caf6bddb3b3e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:56:59 compute-0 nova_compute[185474]: 2026-01-05 14:56:59.983 185478 DEBUG nova.compute.manager [req-7ca2759e-7af6-4c63-8606-695c1d2b6e19 req-9928c37a-4229-4a2e-9917-caf6bddb3b3e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] No waiting events found dispatching network-vif-plugged-9e6c6e1b-0aed-450f-a239-509674dfe31f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 14:56:59 compute-0 nova_compute[185474]: 2026-01-05 14:56:59.983 185478 WARNING nova.compute.manager [req-7ca2759e-7af6-4c63-8606-695c1d2b6e19 req-9928c37a-4229-4a2e-9917-caf6bddb3b3e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Received unexpected event network-vif-plugged-9e6c6e1b-0aed-450f-a239-509674dfe31f for instance with vm_state active and task_state deleting.
Jan 05 14:56:59 compute-0 nova_compute[185474]: 2026-01-05 14:56:59.987 185478 DEBUG nova.network.neutron [-] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.018 185478 INFO nova.compute.manager [-] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Took 2.26 seconds to deallocate network for instance.
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.102 185478 DEBUG oslo_concurrency.lockutils [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.103 185478 DEBUG oslo_concurrency.lockutils [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.222 185478 DEBUG nova.compute.provider_tree [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.384 185478 DEBUG nova.scheduler.client.report [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.461 185478 DEBUG nova.network.neutron [req-cc5b20ce-8d57-4bf1-a57e-e4b3839d82ee req-8e285c14-a669-4f5e-991e-1aa063fec9d9 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updated VIF entry in instance network info cache for port 9e6c6e1b-0aed-450f-a239-509674dfe31f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.462 185478 DEBUG nova.network.neutron [req-cc5b20ce-8d57-4bf1-a57e-e4b3839d82ee req-8e285c14-a669-4f5e-991e-1aa063fec9d9 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Updating instance_info_cache with network_info: [{"id": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "address": "fa:16:3e:4a:9f:84", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.224", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e6c6e1b-0a", "ovs_interfaceid": "9e6c6e1b-0aed-450f-a239-509674dfe31f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.467 185478 DEBUG oslo_concurrency.lockutils [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.484 185478 DEBUG oslo_concurrency.lockutils [req-cc5b20ce-8d57-4bf1-a57e-e4b3839d82ee req-8e285c14-a669-4f5e-991e-1aa063fec9d9 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-bdb0ea32-677c-48d8-ae08-c15ba402d14f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.552 185478 INFO nova.scheduler.client.report [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Deleted allocations for instance bdb0ea32-677c-48d8-ae08-c15ba402d14f
Jan 05 14:57:00 compute-0 nova_compute[185474]: 2026-01-05 14:57:00.655 185478 DEBUG oslo_concurrency.lockutils [None req-2cc2b61c-fbd9-4733-a52d-cfd7f08a9f2a 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bdb0ea32-677c-48d8-ae08-c15ba402d14f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:57:01 compute-0 openstack_network_exporter[205179]: ERROR   14:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:57:01 compute-0 openstack_network_exporter[205179]: ERROR   14:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:57:02 compute-0 podman[244450]: 2026-01-05 14:57:02.643354112 +0000 UTC m=+0.105731545 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=kepler, io.openshift.tags=base rhel9, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Jan 05 14:57:02 compute-0 nova_compute[185474]: 2026-01-05 14:57:02.646 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:03 compute-0 nova_compute[185474]: 2026-01-05 14:57:03.865 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:06 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:57:06.053 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:57:07 compute-0 nova_compute[185474]: 2026-01-05 14:57:07.648 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:08 compute-0 nova_compute[185474]: 2026-01-05 14:57:08.869 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:10 compute-0 podman[244470]: 2026-01-05 14:57:10.623771741 +0000 UTC m=+0.100400560 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 05 14:57:12 compute-0 nova_compute[185474]: 2026-01-05 14:57:12.621 185478 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767625017.6204748, bdb0ea32-677c-48d8-ae08-c15ba402d14f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:57:12 compute-0 nova_compute[185474]: 2026-01-05 14:57:12.622 185478 INFO nova.compute.manager [-] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] VM Stopped (Lifecycle Event)
Jan 05 14:57:12 compute-0 nova_compute[185474]: 2026-01-05 14:57:12.651 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:12 compute-0 nova_compute[185474]: 2026-01-05 14:57:12.656 185478 DEBUG nova.compute.manager [None req-52aa4d97-d359-4f72-924a-bf82bb5851f1 - - - - - -] [instance: bdb0ea32-677c-48d8-ae08-c15ba402d14f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:57:13 compute-0 nova_compute[185474]: 2026-01-05 14:57:13.872 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:17 compute-0 podman[244489]: 2026-01-05 14:57:17.637628863 +0000 UTC m=+0.116555598 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, version=9.6, vcs-type=git, distribution-scope=public)
Jan 05 14:57:17 compute-0 nova_compute[185474]: 2026-01-05 14:57:17.654 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:18 compute-0 nova_compute[185474]: 2026-01-05 14:57:18.875 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:20 compute-0 podman[244511]: 2026-01-05 14:57:20.626743258 +0000 UTC m=+0.109208929 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:57:22 compute-0 podman[244536]: 2026-01-05 14:57:22.620337621 +0000 UTC m=+0.091835989 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:57:22 compute-0 podman[244537]: 2026-01-05 14:57:22.637033693 +0000 UTC m=+0.105318443 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:57:22 compute-0 nova_compute[185474]: 2026-01-05 14:57:22.656 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:23 compute-0 nova_compute[185474]: 2026-01-05 14:57:23.879 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:27 compute-0 nova_compute[185474]: 2026-01-05 14:57:27.658 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:28 compute-0 nova_compute[185474]: 2026-01-05 14:57:28.882 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:29 compute-0 podman[244578]: 2026-01-05 14:57:29.609266629 +0000 UTC m=+0.080276895 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 14:57:29 compute-0 podman[244577]: 2026-01-05 14:57:29.654907645 +0000 UTC m=+0.125769097 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:57:29 compute-0 podman[201880]: time="2026-01-05T14:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:57:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:57:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Jan 05 14:57:31 compute-0 openstack_network_exporter[205179]: ERROR   14:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:57:31 compute-0 openstack_network_exporter[205179]: ERROR   14:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:57:31 compute-0 ovn_controller[97763]: 2026-01-05T14:57:31Z|00053|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 05 14:57:32 compute-0 nova_compute[185474]: 2026-01-05 14:57:32.661 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:33 compute-0 podman[244619]: 2026-01-05 14:57:33.651177107 +0000 UTC m=+0.130471335 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release-0.7.12=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.openshift.expose-services=, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Jan 05 14:57:33 compute-0 nova_compute[185474]: 2026-01-05 14:57:33.886 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:37 compute-0 nova_compute[185474]: 2026-01-05 14:57:37.663 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:38 compute-0 nova_compute[185474]: 2026-01-05 14:57:38.888 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:41 compute-0 podman[244639]: 2026-01-05 14:57:41.66587631 +0000 UTC m=+0.148762800 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Jan 05 14:57:42 compute-0 nova_compute[185474]: 2026-01-05 14:57:42.665 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:43 compute-0 nova_compute[185474]: 2026-01-05 14:57:43.892 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:44 compute-0 nova_compute[185474]: 2026-01-05 14:57:44.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:57:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:57:44.814 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:57:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:57:44.815 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:57:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:57:44.815 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.394 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.439 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.439 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.440 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.440 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.578 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.640 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.641 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.719 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.721 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.793 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.794 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.888 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.897 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.996 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:45 compute-0 nova_compute[185474]: 2026-01-05 14:57:45.997 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.063 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.064 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.153 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.155 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.238 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.245 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.345 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.347 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.427 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.428 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.512 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.513 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.591 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
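The repeated CMD lines above are the nova image backend probing each instance's root disk and its eph0 ephemeral disk with qemu-img info, wrapped in oslo_concurrency.prlimit to cap the helper at 1 GiB of address space and 30 s of CPU time. A minimal sketch of the same probe, assuming qemu-img is installed and the path (copied from the log) points at a readable image file:

    import json
    import os
    import subprocess

    # Same probe nova_compute logs above: JSON metadata for one disk image.
    # --force-share lets us read it even while a running QEMU holds the file.
    DISK = "/var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk"

    out = subprocess.run(
        ["qemu-img", "info", DISK, "--force-share", "--output=json"],
        env={**os.environ, "LC_ALL": "C", "LANG": "C"},
        capture_output=True, text=True, check=True,
    )
    info = json.loads(out.stdout)
    print(info["format"], info["virtual-size"], info.get("actual-size"))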
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.971 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.972 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4782MB free_disk=72.37672805786133GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.973 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:57:46 compute-0 nova_compute[185474]: 2026-01-05 14:57:46.973 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:57:47 compute-0 nova_compute[185474]: 2026-01-05 14:57:47.048 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:57:47 compute-0 nova_compute[185474]: 2026-01-05 14:57:47.049 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance f927dce2-97db-41ff-a7bc-a34d4e7486d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:57:47 compute-0 nova_compute[185474]: 2026-01-05 14:57:47.049 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bf9485c0-8711-436a-aad0-658ecba71329 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:57:47 compute-0 nova_compute[185474]: 2026-01-05 14:57:47.049 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:57:47 compute-0 nova_compute[185474]: 2026-01-05 14:57:47.050 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:57:47 compute-0 nova_compute[185474]: 2026-01-05 14:57:47.132 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:57:47 compute-0 nova_compute[185474]: 2026-01-05 14:57:47.148 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
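The inventory payload above carries raw totals plus a reserved amount and an allocation ratio per resource class; Placement's effective schedulable capacity is (total - reserved) * allocation_ratio. A quick check of that formula against the logged numbers (an illustration of the arithmetic, not nova code):

    # Effective capacity as Placement computes it:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2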
Jan 05 14:57:47 compute-0 nova_compute[185474]: 2026-01-05 14:57:47.172 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:57:47 compute-0 nova_compute[185474]: 2026-01-05 14:57:47.173 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
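The "compute_resources" acquired/released pair is oslo.concurrency's named-lock machinery: the resource tracker serialises _update_available_resource against concurrent periodic tasks and instance claims. A minimal sketch of the same pattern (not the nova code itself), using the public lockutils API:

    from oslo_concurrency import lockutils

    # A process-local named lock: every function decorated with the same
    # lock name runs mutually exclusively, which is what produces the
    # "Acquiring lock ... / Lock ... released" pairs in the log.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # ... recompute and persist the resource view here ...
        pass

    # Equivalent context-manager form for ad-hoc critical sections.
    with lockutils.lock("compute_resources"):
        pass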
Jan 05 14:57:47 compute-0 nova_compute[185474]: 2026-01-05 14:57:47.667 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:48 compute-0 nova_compute[185474]: 2026-01-05 14:57:48.175 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:57:48 compute-0 nova_compute[185474]: 2026-01-05 14:57:48.176 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:57:48 compute-0 nova_compute[185474]: 2026-01-05 14:57:48.177 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:57:48 compute-0 podman[244695]: 2026-01-05 14:57:48.627498465 +0000 UTC m=+0.111160752 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350)
Jan 05 14:57:48 compute-0 nova_compute[185474]: 2026-01-05 14:57:48.894 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:49 compute-0 nova_compute[185474]: 2026-01-05 14:57:49.396 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:57:49 compute-0 nova_compute[185474]: 2026-01-05 14:57:49.556 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:57:50 compute-0 nova_compute[185474]: 2026-01-05 14:57:50.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:57:50 compute-0 nova_compute[185474]: 2026-01-05 14:57:50.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:57:51 compute-0 nova_compute[185474]: 2026-01-05 14:57:51.347 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:57:51 compute-0 nova_compute[185474]: 2026-01-05 14:57:51.348 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:57:51 compute-0 nova_compute[185474]: 2026-01-05 14:57:51.348 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:57:51 compute-0 podman[244717]: 2026-01-05 14:57:51.678827234 +0000 UTC m=+0.156464179 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 14:57:52 compute-0 nova_compute[185474]: 2026-01-05 14:57:52.672 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:53 compute-0 podman[244741]: 2026-01-05 14:57:53.627540932 +0000 UTC m=+0.089568097 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 14:57:53 compute-0 podman[244740]: 2026-01-05 14:57:53.62895108 +0000 UTC m=+0.102975290 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 14:57:53 compute-0 nova_compute[185474]: 2026-01-05 14:57:53.800 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Updating instance_info_cache with network_info: [{"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
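The info-cache payload above is a list of VIF dictionaries, each carrying the Neutron port ID, MAC, and nested subnet/IP structures including any floating IPs. A short sketch of pulling the fixed and floating addresses out of a structure shaped like the logged one (the literal below is a trimmed copy of that entry):

    network_info = [{
        "id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f",
        "address": "fa:16:3e:ef:7d:54",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{
                "address": "192.168.0.72",
                "type": "fixed",
                "floating_ips": [{"address": "192.168.122.227", "type": "floating"}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], vif["address"], ip["address"], floats)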
Jan 05 14:57:53 compute-0 nova_compute[185474]: 2026-01-05 14:57:53.898 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:54 compute-0 nova_compute[185474]: 2026-01-05 14:57:54.661 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:57:54 compute-0 nova_compute[185474]: 2026-01-05 14:57:54.662 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:57:54 compute-0 nova_compute[185474]: 2026-01-05 14:57:54.662 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:57:57 compute-0 nova_compute[185474]: 2026-01-05 14:57:57.675 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:58 compute-0 nova_compute[185474]: 2026-01-05 14:57:58.902 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:57:59 compute-0 podman[201880]: time="2026-01-05T14:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:57:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:57:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 05 14:58:00 compute-0 podman[244783]: 2026-01-05 14:58:00.652679915 +0000 UTC m=+0.124104831 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:58:00 compute-0 podman[244784]: 2026-01-05 14:58:00.654504135 +0000 UTC m=+0.117354819 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:58:01 compute-0 openstack_network_exporter[205179]: ERROR   14:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:58:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:58:01 compute-0 openstack_network_exporter[205179]: ERROR   14:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:58:01 compute-0 openstack_network_exporter[205179]: 
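The pmd-rxq-show / pmd-perf-show errors above come from the exporter asking ovs-vswitchd for userspace (dpif-netdev) PMD statistics on a host that only runs the kernel datapath, so there is no netdev datapath to report on; they are recurring noise rather than a fault. One way to confirm which datapaths exist, sketched with the standard ovs-appctl command (assumes ovs-vswitchd is running and sufficient privileges):

    import subprocess

    # Lists the datapaths ovs-vswitchd manages, e.g. "system@ovs-system" for
    # the kernel datapath; a "netdev@..." entry would be needed for the PMD
    # commands the exporter calls above to succeed.
    out = subprocess.run(
        ["ovs-appctl", "dpif/show"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)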
Jan 05 14:58:02 compute-0 nova_compute[185474]: 2026-01-05 14:58:02.679 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:03 compute-0 nova_compute[185474]: 2026-01-05 14:58:03.904 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:04 compute-0 podman[244823]: 2026-01-05 14:58:04.652872266 +0000 UTC m=+0.120920316 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler)
Jan 05 14:58:07 compute-0 nova_compute[185474]: 2026-01-05 14:58:07.683 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:08 compute-0 nova_compute[185474]: 2026-01-05 14:58:08.906 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:12 compute-0 podman[244843]: 2026-01-05 14:58:12.631624802 +0000 UTC m=+0.108003797 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251224, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 14:58:12 compute-0 nova_compute[185474]: 2026-01-05 14:58:12.686 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:13 compute-0 nova_compute[185474]: 2026-01-05 14:58:13.909 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:17 compute-0 nova_compute[185474]: 2026-01-05 14:58:17.689 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:18 compute-0 nova_compute[185474]: 2026-01-05 14:58:18.913 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:19 compute-0 podman[244863]: 2026-01-05 14:58:19.632422291 +0000 UTC m=+0.121047750 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Jan 05 14:58:22 compute-0 nova_compute[185474]: 2026-01-05 14:58:22.691 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:22 compute-0 podman[244883]: 2026-01-05 14:58:22.709643855 +0000 UTC m=+0.185792633 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 05 14:58:23 compute-0 nova_compute[185474]: 2026-01-05 14:58:23.915 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:24 compute-0 podman[244909]: 2026-01-05 14:58:24.596093986 +0000 UTC m=+0.076217525 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 05 14:58:24 compute-0 podman[244908]: 2026-01-05 14:58:24.623107838 +0000 UTC m=+0.112007984 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:58:25 compute-0 sshd-session[244948]: Received disconnect from 193.46.255.103 port 31984:11:  [preauth]
Jan 05 14:58:25 compute-0 sshd-session[244948]: Disconnected from authenticating user root 193.46.255.103 port 31984 [preauth]
Jan 05 14:58:27 compute-0 nova_compute[185474]: 2026-01-05 14:58:27.694 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:28 compute-0 nova_compute[185474]: 2026-01-05 14:58:28.918 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:29 compute-0 podman[201880]: time="2026-01-05T14:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:58:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:58:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4381 "" "Go-http-client/1.1"
Jan 05 14:58:31 compute-0 openstack_network_exporter[205179]: ERROR   14:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:58:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:58:31 compute-0 openstack_network_exporter[205179]: ERROR   14:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:58:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:58:31 compute-0 podman[244951]: 2026-01-05 14:58:31.644903244 +0000 UTC m=+0.114095022 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:58:31 compute-0 podman[244950]: 2026-01-05 14:58:31.656032345 +0000 UTC m=+0.123710042 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 14:58:32 compute-0 nova_compute[185474]: 2026-01-05 14:58:32.697 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:33 compute-0 nova_compute[185474]: 2026-01-05 14:58:33.920 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:35 compute-0 podman[244992]: 2026-01-05 14:58:35.614172406 +0000 UTC m=+0.106391923 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, name=ubi9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=, release-0.7.12=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Jan 05 14:58:37 compute-0 nova_compute[185474]: 2026-01-05 14:58:37.699 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.754 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.755 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.762 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf9485c0-8711-436a-aad0-658ecba71329', 'name': 'vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2cf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.767 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'name': 'test_0', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.770 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f927dce2-97db-41ff-a7bc-a34d4e7486d4', 'name': 'vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
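
The three "instance data:" lines above show the full per-instance dict that discovery hands to every compute pollster; the same fields later appear as sample resource metadata. The payload is printed with Python repr, so, as a rough sketch (assuming the dict really is a plain literal as it appears in these lines), it can be recovered from a journal line like this:

import ast

def parse_instance_data(line):
    """Extract the dict printed after 'instance data:' in a discovery debug line.
    Assumes the payload is a Python literal, as in the lines above."""
    payload = line.split("instance data: ", 1)[1]
    payload = payload.rsplit(" discover_libvirt_polling", 1)[0]  # drop the caller tag
    return ast.literal_eval(payload)

# For the 'test_0' line above this yields, among other fields:
#   inst['id']                             -> '731f6e65-e951-4af3-aaf3-0322c02b154c'
#   inst['flavor']['name']                 -> 'm1.small'
#   inst['OS-EXT-SRV-ATTR:instance_name']  -> 'instance-00000001'
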
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.771 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.771 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.771 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.771 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.772 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T14:58:37.771641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.851 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 1385624795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.852 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 14233900 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.853 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.967 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 1728689582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.967 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 18915144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:37.968 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.111 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 1801199740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.112 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 10969023 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.112 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.113 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.114 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.114 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.114 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.114 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.115 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 464426220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.115 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 74874753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T14:58:38.114725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.116 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 83046078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.116 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 396012509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.117 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 113701999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.117 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 62657112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.117 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 545412987 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.118 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 103754380 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.119 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.latency volume: 84932339 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.119 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.120 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.120 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.121 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.121 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T14:58:38.120952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.121 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.122 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.122 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.123 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.123 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.124 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.124 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.124 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.125 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.126 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.126 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T14:58:38.126889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.165 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.166 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.166 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.213 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.214 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.214 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.258 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.259 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.260 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.261 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
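
Each "<instance>/disk.device.usage volume: N" debug line above is one per-device sample (three devices per instance here). As an illustration only of how those rows relate (the grouping helper below is hypothetical, not ceilometer code), the debug lines can be grouped by instance UUID and the per-device bytes summed:

import re
from collections import defaultdict

VOL_RE = re.compile(r"(?P<inst>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<vol>\d+)")

def sum_per_instance(lines, meter="disk.device.usage"):
    """Sum the per-device volumes of one meter for each instance UUID."""
    totals = defaultdict(int)
    for line in lines:
        m = VOL_RE.search(line)
        if m and m.group("meter") == meter:
            totals[m.group("inst")] += int(m.group("vol"))
    return dict(totals)

# For bf9485c0-... above: 21299200 + 393216 + 583680 = 22276096 bytes across its devices.
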
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.262 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.262 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.262 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.262 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T14:58:38.262737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.270 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.277 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.283 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.285 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.286 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.286 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.286 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.287 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.287 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.288 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.289 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.290 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 41832448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.290 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.291 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.292 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.293 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.293 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T14:58:38.287030) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.295 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.295 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.296 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T14:58:38.296314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.298 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.299 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.299 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T14:58:38.299104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.300 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.300 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.301 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.301 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.301 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.302 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.302 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.303 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.304 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.304 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.305 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T14:58:38.305297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.305 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.306 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.306 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.307 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.307 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.308 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.309 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.309 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.309 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.310 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T14:58:38.310823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.352 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/cpu volume: 32830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.393 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/cpu volume: 45390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.435 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/cpu volume: 34730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.436 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.436 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.436 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.436 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.437 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.438 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.438 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.438 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T14:58:38.436527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.439 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.439 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.440 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.440 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T14:58:38.438660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.440 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.441 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.441 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.442 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.442 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.442 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.442 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T14:58:38.440940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.442 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.443 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.443 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.444 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.444 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.444 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T14:58:38.442733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.444 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.445 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.445 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.445 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.446 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.446 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.446 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.446 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.446 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.446 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.446 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T14:58:38.444873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.447 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.447 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T14:58:38.446641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.447 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.447 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.448 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.448 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.448 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.448 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.448 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.449 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.449 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.449 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.450 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.450 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T14:58:38.448477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.450 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.450 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.450 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.450 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.450 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.451 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.451 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.452 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T14:58:38.450387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.452 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.452 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.452 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.452 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.453 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.453 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.453 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.454 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T14:58:38.452409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.454 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.454 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.454 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.455 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.455 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.455 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.455 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T14:58:38.454373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.456 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.456 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/memory.usage volume: 49.046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.456 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/memory.usage volume: 48.7578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T14:58:38.456356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.457 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/memory.usage volume: 48.88671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.458 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.458 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.458 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.458 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T14:58:38.458479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.459 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.459 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.459 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.460 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.460 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.460 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.460 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.461 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.462 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.462 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.462 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.462 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.463 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.464 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.464 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.464 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.465 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.465 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.466 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.466 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.466 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.467 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.467 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.467 14 DEBUG ceilometer.compute.pollsters [-] f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T14:58:38.462398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T14:58:38.464613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 14:58:38.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 14:58:38 compute-0 nova_compute[185474]: 2026-01-05 14:58:38.922 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:42 compute-0 nova_compute[185474]: 2026-01-05 14:58:42.700 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:43 compute-0 podman[245012]: 2026-01-05 14:58:43.614613175 +0000 UTC m=+0.106998468 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 05 14:58:43 compute-0 nova_compute[185474]: 2026-01-05 14:58:43.930 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:58:44.815 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:58:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:58:44.816 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:58:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:58:44.817 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.433 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.434 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.435 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.435 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.565 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.665 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.668 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.765 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.767 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.850 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.851 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.963 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:46 compute-0 nova_compute[185474]: 2026-01-05 14:58:46.970 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.055 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.057 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.121 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.122 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.220 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.222 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.323 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.333 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.418 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.420 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.515 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.517 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.598 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.600 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.697 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:58:47 compute-0 nova_compute[185474]: 2026-01-05 14:58:47.704 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.226 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.227 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4771MB free_disk=72.37678527832031GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.228 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.228 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.360 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.360 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance f927dce2-97db-41ff-a7bc-a34d4e7486d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.361 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bf9485c0-8711-436a-aad0-658ecba71329 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.361 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.361 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.428 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.443 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.444 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.445 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:58:48 compute-0 nova_compute[185474]: 2026-01-05 14:58:48.934 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:49 compute-0 nova_compute[185474]: 2026-01-05 14:58:49.444 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:58:49 compute-0 nova_compute[185474]: 2026-01-05 14:58:49.445 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:58:49 compute-0 nova_compute[185474]: 2026-01-05 14:58:49.445 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:58:49 compute-0 nova_compute[185474]: 2026-01-05 14:58:49.445 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:58:49 compute-0 nova_compute[185474]: 2026-01-05 14:58:49.446 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:58:50 compute-0 nova_compute[185474]: 2026-01-05 14:58:50.401 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:58:50 compute-0 nova_compute[185474]: 2026-01-05 14:58:50.401 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:58:50 compute-0 nova_compute[185474]: 2026-01-05 14:58:50.402 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 14:58:50 compute-0 nova_compute[185474]: 2026-01-05 14:58:50.626 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:58:50 compute-0 nova_compute[185474]: 2026-01-05 14:58:50.626 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:58:50 compute-0 nova_compute[185474]: 2026-01-05 14:58:50.626 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:58:50 compute-0 nova_compute[185474]: 2026-01-05 14:58:50.627 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:58:50 compute-0 podman[245071]: 2026-01-05 14:58:50.655873105 +0000 UTC m=+0.129476067 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=openstack_network_exporter)
Jan 05 14:58:52 compute-0 nova_compute[185474]: 2026-01-05 14:58:52.708 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:52 compute-0 nova_compute[185474]: 2026-01-05 14:58:52.894 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:58:52 compute-0 nova_compute[185474]: 2026-01-05 14:58:52.914 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:58:52 compute-0 nova_compute[185474]: 2026-01-05 14:58:52.915 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:58:52 compute-0 nova_compute[185474]: 2026-01-05 14:58:52.916 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:58:53 compute-0 podman[245092]: 2026-01-05 14:58:53.70244772 +0000 UTC m=+0.165704739 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 14:58:53 compute-0 nova_compute[185474]: 2026-01-05 14:58:53.938 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:55 compute-0 podman[245119]: 2026-01-05 14:58:55.591016251 +0000 UTC m=+0.076429722 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:58:55 compute-0 podman[245118]: 2026-01-05 14:58:55.593496048 +0000 UTC m=+0.087740048 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:58:57 compute-0 nova_compute[185474]: 2026-01-05 14:58:57.711 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:58 compute-0 nova_compute[185474]: 2026-01-05 14:58:58.940 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:58:59 compute-0 podman[201880]: time="2026-01-05T14:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:58:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:58:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.227 185478 DEBUG oslo_concurrency.lockutils [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.229 185478 DEBUG oslo_concurrency.lockutils [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.229 185478 DEBUG oslo_concurrency.lockutils [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.230 185478 DEBUG oslo_concurrency.lockutils [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.231 185478 DEBUG oslo_concurrency.lockutils [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.234 185478 INFO nova.compute.manager [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Terminating instance
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.236 185478 DEBUG nova.compute.manager [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 14:59:00 compute-0 kernel: tap4d2a5913-5b (unregistering): left promiscuous mode
Jan 05 14:59:00 compute-0 NetworkManager[56139]: <info>  [1767625140.3586] device (tap4d2a5913-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 14:59:00 compute-0 ovn_controller[97763]: 2026-01-05T14:59:00Z|00054|binding|INFO|Releasing lport 4d2a5913-5bee-4ecb-8f19-5653e42acc47 from this chassis (sb_readonly=0)
Jan 05 14:59:00 compute-0 ovn_controller[97763]: 2026-01-05T14:59:00Z|00055|binding|INFO|Setting lport 4d2a5913-5bee-4ecb-8f19-5653e42acc47 down in Southbound
Jan 05 14:59:00 compute-0 ovn_controller[97763]: 2026-01-05T14:59:00Z|00056|binding|INFO|Removing iface tap4d2a5913-5b ovn-installed in OVS
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.372 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.374 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.379 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:98:05 192.168.0.34'], port_security=['fa:16:3e:84:98:05 192.168.0.34'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-zgjawdmpyczt-xcfguwxpygfw-nks53nwkysgu-port-2omiqc7m4ytm', 'neutron:cidrs': '192.168.0.34/24', 'neutron:device_id': 'f927dce2-97db-41ff-a7bc-a34d4e7486d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-905a1599-2980-4b24-9705-76e3c8a469ea', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-zgjawdmpyczt-xcfguwxpygfw-nks53nwkysgu-port-2omiqc7m4ytm', 'neutron:project_id': '54417029b2fb4b749e20754214013802', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a131d1b-ed26-4729-8c09-f87c7299dcd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9f4be22-b417-4efb-ba81-f8a9c3c4527d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=4d2a5913-5bee-4ecb-8f19-5653e42acc47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.380 107222 INFO neutron.agent.ovn.metadata.agent [-] Port 4d2a5913-5bee-4ecb-8f19-5653e42acc47 in datapath 905a1599-2980-4b24-9705-76e3c8a469ea unbound from our chassis
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.381 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 905a1599-2980-4b24-9705-76e3c8a469ea
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.396 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.403 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f388cfc5-52e6-4e85-b10d-34a5b46cc9d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:59:00 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.442 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[6de9a57b-0dec-46bb-8c69-d8774faed804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:59:00 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 36.988s CPU time.
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.447 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[19e30998-6749-42b0-86b4-11da968f96bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:59:00 compute-0 systemd-machined[156786]: Machine qemu-3-instance-00000003 terminated.
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.487 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[50da2e0e-fd2a-4b7e-ac33-0066623f63fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.509 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[57d4984a-2f27-41a4-a10c-ea3f6f939714]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap905a1599-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:e4:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 13, 'rx_bytes': 658, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 366227, 'reachable_time': 20014, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245189, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.527 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[477c4dbd-2190-4e0a-b621-d9508bd35a3f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366246, 'tstamp': 366246}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245195, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366251, 'tstamp': 366251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245195, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.529 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905a1599-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.533 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.534 185478 INFO nova.virt.libvirt.driver [-] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Instance destroyed successfully.
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.535 185478 DEBUG nova.objects.instance [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'resources' on Instance uuid f927dce2-97db-41ff-a7bc-a34d4e7486d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.540 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap905a1599-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.540 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.540 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.541 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap905a1599-20, col_values=(('external_ids', {'iface-id': 'add49293-6ad0-4684-b3cd-091b92792de4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:59:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:00.541 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.610 185478 DEBUG nova.virt.libvirt.vif [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T14:51:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-dmpyczt-xcfguwxpygfw-nks53nwkysgu-vnf-q3vvgayg7sek',id=3,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T14:51:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='fb98dcdd-a12e-44ca-97ca-fe43134a3faa'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-04w16ma5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T14:51:11Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zOTk4MTAxOTg2Mzg3NTM2NDQxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM5OTgxMDE5ODYzODc1MzY0NDE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mzk5ODEwMTk4NjM4NzUzNjQ0MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM5OTgxMDE5ODYzODc1MzY0NDE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zOTk4MTAxOTg2Mzg3NTM2NDQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zOTk4MTAxOTg2Mzg3NTM2NDQxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 05 14:59:00 compute-0 nova_compute[185474]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mzk5ODEwMTk4NjM4NzUzNjQ0MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM5OTgxMDE5ODYzODc1MzY0NDE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zOTk4MTAxOTg2Mzg3NTM2NDQxPT0tLQo=',user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=f927dce2-97db-41ff-a7bc-a34d4e7486d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.611 185478 DEBUG nova.network.os_vif_util [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.612 185478 DEBUG nova.network.os_vif_util [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:98:05,bridge_name='br-int',has_traffic_filtering=True,id=4d2a5913-5bee-4ecb-8f19-5653e42acc47,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4d2a5913-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.613 185478 DEBUG os_vif [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:98:05,bridge_name='br-int',has_traffic_filtering=True,id=4d2a5913-5bee-4ecb-8f19-5653e42acc47,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4d2a5913-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.615 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.615 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d2a5913-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.618 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.619 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.621 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.625 185478 INFO os_vif [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:98:05,bridge_name='br-int',has_traffic_filtering=True,id=4d2a5913-5bee-4ecb-8f19-5653e42acc47,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4d2a5913-5b')
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.625 185478 INFO nova.virt.libvirt.driver [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Deleting instance files /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4_del
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.626 185478 INFO nova.virt.libvirt.driver [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Deletion of /var/lib/nova/instances/f927dce2-97db-41ff-a7bc-a34d4e7486d4_del complete
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.715 185478 INFO nova.compute.manager [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Took 0.48 seconds to destroy the instance on the hypervisor.
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.716 185478 DEBUG oslo.service.loopingcall [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.717 185478 DEBUG nova.compute.manager [-] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.718 185478 DEBUG nova.network.neutron [-] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.770 185478 DEBUG nova.compute.manager [req-4cfd4384-676c-4377-a22a-27402f762ee3 req-1c184b91-43e1-4c86-b1dd-72913917a55e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Received event network-vif-unplugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.770 185478 DEBUG oslo_concurrency.lockutils [req-4cfd4384-676c-4377-a22a-27402f762ee3 req-1c184b91-43e1-4c86-b1dd-72913917a55e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.771 185478 DEBUG oslo_concurrency.lockutils [req-4cfd4384-676c-4377-a22a-27402f762ee3 req-1c184b91-43e1-4c86-b1dd-72913917a55e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.771 185478 DEBUG oslo_concurrency.lockutils [req-4cfd4384-676c-4377-a22a-27402f762ee3 req-1c184b91-43e1-4c86-b1dd-72913917a55e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.772 185478 DEBUG nova.compute.manager [req-4cfd4384-676c-4377-a22a-27402f762ee3 req-1c184b91-43e1-4c86-b1dd-72913917a55e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] No waiting events found dispatching network-vif-unplugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 14:59:00 compute-0 nova_compute[185474]: 2026-01-05 14:59:00.772 185478 DEBUG nova.compute.manager [req-4cfd4384-676c-4377-a22a-27402f762ee3 req-1c184b91-43e1-4c86-b1dd-72913917a55e 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Received event network-vif-unplugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 14:59:00 compute-0 rsyslogd[237079]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 14:59:00.610 185478 DEBUG nova.virt.libvirt.vif [None req-ee543aae-21 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 14:59:01 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:01.024 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 14:59:01 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:01.025 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 14:59:01 compute-0 nova_compute[185474]: 2026-01-05 14:59:01.026 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:01 compute-0 sshd-session[245199]: Invalid user sol from 165.22.168.95 port 58102
Jan 05 14:59:01 compute-0 sshd-session[245199]: Connection closed by invalid user sol 165.22.168.95 port 58102 [preauth]
Jan 05 14:59:01 compute-0 openstack_network_exporter[205179]: ERROR   14:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:59:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:59:01 compute-0 openstack_network_exporter[205179]: ERROR   14:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:59:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 14:59:01 compute-0 nova_compute[185474]: 2026-01-05 14:59:01.918 185478 DEBUG nova.compute.manager [req-1699103d-ff63-4d63-8213-396df10e1ecb req-a0522ccd-3a4d-4a30-8df2-25ecc5c84c8f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Received event network-changed-4d2a5913-5bee-4ecb-8f19-5653e42acc47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:59:01 compute-0 nova_compute[185474]: 2026-01-05 14:59:01.919 185478 DEBUG nova.compute.manager [req-1699103d-ff63-4d63-8213-396df10e1ecb req-a0522ccd-3a4d-4a30-8df2-25ecc5c84c8f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Refreshing instance network info cache due to event network-changed-4d2a5913-5bee-4ecb-8f19-5653e42acc47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 14:59:01 compute-0 nova_compute[185474]: 2026-01-05 14:59:01.919 185478 DEBUG oslo_concurrency.lockutils [req-1699103d-ff63-4d63-8213-396df10e1ecb req-a0522ccd-3a4d-4a30-8df2-25ecc5c84c8f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:59:01 compute-0 nova_compute[185474]: 2026-01-05 14:59:01.919 185478 DEBUG oslo_concurrency.lockutils [req-1699103d-ff63-4d63-8213-396df10e1ecb req-a0522ccd-3a4d-4a30-8df2-25ecc5c84c8f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:59:01 compute-0 nova_compute[185474]: 2026-01-05 14:59:01.920 185478 DEBUG nova.network.neutron [req-1699103d-ff63-4d63-8213-396df10e1ecb req-a0522ccd-3a4d-4a30-8df2-25ecc5c84c8f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Refreshing network info cache for port 4d2a5913-5bee-4ecb-8f19-5653e42acc47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 14:59:02 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:02.028 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.479 185478 DEBUG nova.network.neutron [-] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.501 185478 INFO nova.compute.manager [-] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Took 1.78 seconds to deallocate network for instance.
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.571 185478 DEBUG oslo_concurrency.lockutils [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.573 185478 DEBUG oslo_concurrency.lockutils [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:02 compute-0 podman[245201]: 2026-01-05 14:59:02.662292379 +0000 UTC m=+0.132759578 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 05 14:59:02 compute-0 podman[245202]: 2026-01-05 14:59:02.676940545 +0000 UTC m=+0.147316821 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.715 185478 DEBUG nova.compute.provider_tree [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.730 185478 DEBUG nova.scheduler.client.report [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.764 185478 DEBUG oslo_concurrency.lockutils [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.824 185478 INFO nova.scheduler.client.report [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Deleted allocations for instance f927dce2-97db-41ff-a7bc-a34d4e7486d4
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.897 185478 DEBUG oslo_concurrency.lockutils [None req-ee543aae-2119-4670-9230-be072c7b2790 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.931 185478 DEBUG nova.compute.manager [req-99245b47-c837-49b2-9bd3-9d368bd522b5 req-e328890c-6632-44a8-9786-5a9d70179c09 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Received event network-vif-plugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.931 185478 DEBUG oslo_concurrency.lockutils [req-99245b47-c837-49b2-9bd3-9d368bd522b5 req-e328890c-6632-44a8-9786-5a9d70179c09 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.932 185478 DEBUG oslo_concurrency.lockutils [req-99245b47-c837-49b2-9bd3-9d368bd522b5 req-e328890c-6632-44a8-9786-5a9d70179c09 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.932 185478 DEBUG oslo_concurrency.lockutils [req-99245b47-c837-49b2-9bd3-9d368bd522b5 req-e328890c-6632-44a8-9786-5a9d70179c09 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "f927dce2-97db-41ff-a7bc-a34d4e7486d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.933 185478 DEBUG nova.compute.manager [req-99245b47-c837-49b2-9bd3-9d368bd522b5 req-e328890c-6632-44a8-9786-5a9d70179c09 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] No waiting events found dispatching network-vif-plugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 14:59:02 compute-0 nova_compute[185474]: 2026-01-05 14:59:02.933 185478 WARNING nova.compute.manager [req-99245b47-c837-49b2-9bd3-9d368bd522b5 req-e328890c-6632-44a8-9786-5a9d70179c09 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Received unexpected event network-vif-plugged-4d2a5913-5bee-4ecb-8f19-5653e42acc47 for instance with vm_state deleted and task_state None.
Jan 05 14:59:03 compute-0 nova_compute[185474]: 2026-01-05 14:59:03.415 185478 DEBUG nova.network.neutron [req-1699103d-ff63-4d63-8213-396df10e1ecb req-a0522ccd-3a4d-4a30-8df2-25ecc5c84c8f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Updated VIF entry in instance network info cache for port 4d2a5913-5bee-4ecb-8f19-5653e42acc47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 14:59:03 compute-0 nova_compute[185474]: 2026-01-05 14:59:03.417 185478 DEBUG nova.network.neutron [req-1699103d-ff63-4d63-8213-396df10e1ecb req-a0522ccd-3a4d-4a30-8df2-25ecc5c84c8f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Updating instance_info_cache with network_info: [{"id": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "address": "fa:16:3e:84:98:05", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d2a5913-5b", "ovs_interfaceid": "4d2a5913-5bee-4ecb-8f19-5653e42acc47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:59:03 compute-0 nova_compute[185474]: 2026-01-05 14:59:03.447 185478 DEBUG oslo_concurrency.lockutils [req-1699103d-ff63-4d63-8213-396df10e1ecb req-a0522ccd-3a4d-4a30-8df2-25ecc5c84c8f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-f927dce2-97db-41ff-a7bc-a34d4e7486d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:59:03 compute-0 nova_compute[185474]: 2026-01-05 14:59:03.945 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:05 compute-0 nova_compute[185474]: 2026-01-05 14:59:05.619 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:06 compute-0 podman[245242]: 2026-01-05 14:59:06.67698299 +0000 UTC m=+0.150150608 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, release-0.7.12=, io.buildah.version=1.29.0, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, maintainer=Red Hat, Inc.)
Jan 05 14:59:08 compute-0 nova_compute[185474]: 2026-01-05 14:59:08.948 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:10 compute-0 nova_compute[185474]: 2026-01-05 14:59:10.622 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:13 compute-0 nova_compute[185474]: 2026-01-05 14:59:13.951 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:14 compute-0 podman[245262]: 2026-01-05 14:59:14.651050276 +0000 UTC m=+0.124619126 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 14:59:15 compute-0 nova_compute[185474]: 2026-01-05 14:59:15.530 185478 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767625140.529118, f927dce2-97db-41ff-a7bc-a34d4e7486d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:59:15 compute-0 nova_compute[185474]: 2026-01-05 14:59:15.532 185478 INFO nova.compute.manager [-] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] VM Stopped (Lifecycle Event)
Jan 05 14:59:15 compute-0 nova_compute[185474]: 2026-01-05 14:59:15.572 185478 DEBUG nova.compute.manager [None req-acae116c-adf8-4d70-ab4a-c38df90d43fe - - - - - -] [instance: f927dce2-97db-41ff-a7bc-a34d4e7486d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:59:15 compute-0 nova_compute[185474]: 2026-01-05 14:59:15.626 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:18 compute-0 nova_compute[185474]: 2026-01-05 14:59:18.955 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:20 compute-0 nova_compute[185474]: 2026-01-05 14:59:20.630 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:21 compute-0 podman[245284]: 2026-01-05 14:59:21.651385441 +0000 UTC m=+0.123847753 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Jan 05 14:59:23 compute-0 nova_compute[185474]: 2026-01-05 14:59:23.958 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:24 compute-0 podman[245304]: 2026-01-05 14:59:24.737175399 +0000 UTC m=+0.213782004 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 05 14:59:25 compute-0 nova_compute[185474]: 2026-01-05 14:59:25.634 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:26 compute-0 podman[245328]: 2026-01-05 14:59:26.953867499 +0000 UTC m=+0.115192268 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 14:59:26 compute-0 podman[245329]: 2026-01-05 14:59:26.961789524 +0000 UTC m=+0.110435218 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 05 14:59:28 compute-0 nova_compute[185474]: 2026-01-05 14:59:28.961 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:29 compute-0 podman[201880]: time="2026-01-05T14:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:59:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:59:29 compute-0 podman[201880]: @ - - [05/Jan/2026:14:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4382 "" "Go-http-client/1.1"
Jan 05 14:59:30 compute-0 nova_compute[185474]: 2026-01-05 14:59:30.637 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:31 compute-0 openstack_network_exporter[205179]: ERROR   14:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 14:59:31 compute-0 openstack_network_exporter[205179]: ERROR   14:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 14:59:33 compute-0 podman[245370]: 2026-01-05 14:59:33.611128643 +0000 UTC m=+0.095506515 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi)
Jan 05 14:59:33 compute-0 podman[245371]: 2026-01-05 14:59:33.646121822 +0000 UTC m=+0.117107450 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 14:59:33 compute-0 nova_compute[185474]: 2026-01-05 14:59:33.964 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:34 compute-0 ovn_controller[97763]: 2026-01-05T14:59:34Z|00057|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 05 14:59:35 compute-0 nova_compute[185474]: 2026-01-05 14:59:35.641 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:36 compute-0 sshd-session[245409]: Accepted publickey for zuul from 38.102.83.65 port 43898 ssh2: RSA SHA256:J8z/B181hdplgLZFhp0hXyUBZUpMLnoe/Gt2JPtUKmM
Jan 05 14:59:36 compute-0 systemd-logind[795]: New session 30 of user zuul.
Jan 05 14:59:36 compute-0 systemd[1]: Started Session 30 of User zuul.
Jan 05 14:59:36 compute-0 sshd-session[245409]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 14:59:36 compute-0 podman[245411]: 2026-01-05 14:59:36.971115627 +0000 UTC m=+0.112103955 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, config_id=kepler, version=9.4, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, distribution-scope=public, release-0.7.12=)
Jan 05 14:59:37 compute-0 sudo[245607]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdlovjflllozuqnmilszrjhsasvahrpu ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767625177.0497255-59436-63792820584533/AnsiballZ_command.py'
Jan 05 14:59:37 compute-0 sudo[245607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 14:59:38 compute-0 python3[245609]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 14:59:38 compute-0 sudo[245607]: pam_unix(sudo:session): session closed for user root
Jan 05 14:59:38 compute-0 nova_compute[185474]: 2026-01-05 14:59:38.968 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:40 compute-0 nova_compute[185474]: 2026-01-05 14:59:40.644 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:43 compute-0 nova_compute[185474]: 2026-01-05 14:59:43.973 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:44.817 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:44.817 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 14:59:44.817 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:44 compute-0 podman[245646]: 2026-01-05 14:59:44.865422726 +0000 UTC m=+0.148152663 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 14:59:45 compute-0 nova_compute[185474]: 2026-01-05 14:59:45.646 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.457 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.458 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.458 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.458 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.582 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.694 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.696 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.788 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.789 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.865 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.866 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.924 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.935 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.997 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:46 compute-0 nova_compute[185474]: 2026-01-05 14:59:46.998 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.063 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.064 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.152 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.153 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.235 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.843 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.846 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4951MB free_disk=72.39873886108398GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.848 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.849 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.948 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.948 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bf9485c0-8711-436a-aad0-658ecba71329 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.949 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.949 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 14:59:47 compute-0 nova_compute[185474]: 2026-01-05 14:59:47.998 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:59:48 compute-0 nova_compute[185474]: 2026-01-05 14:59:48.012 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:59:48 compute-0 nova_compute[185474]: 2026-01-05 14:59:48.032 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 14:59:48 compute-0 nova_compute[185474]: 2026-01-05 14:59:48.032 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:48 compute-0 nova_compute[185474]: 2026-01-05 14:59:48.976 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:49 compute-0 nova_compute[185474]: 2026-01-05 14:59:49.028 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:59:49 compute-0 nova_compute[185474]: 2026-01-05 14:59:49.028 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:59:49 compute-0 nova_compute[185474]: 2026-01-05 14:59:49.029 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:59:49 compute-0 nova_compute[185474]: 2026-01-05 14:59:49.029 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 14:59:49 compute-0 nova_compute[185474]: 2026-01-05 14:59:49.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:59:50 compute-0 nova_compute[185474]: 2026-01-05 14:59:50.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:59:50 compute-0 nova_compute[185474]: 2026-01-05 14:59:50.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:59:50 compute-0 nova_compute[185474]: 2026-01-05 14:59:50.649 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:51 compute-0 nova_compute[185474]: 2026-01-05 14:59:51.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:59:51 compute-0 nova_compute[185474]: 2026-01-05 14:59:51.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 14:59:52 compute-0 podman[245692]: 2026-01-05 14:59:52.646253554 +0000 UTC m=+0.125368834 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 05 14:59:52 compute-0 nova_compute[185474]: 2026-01-05 14:59:52.658 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 14:59:52 compute-0 nova_compute[185474]: 2026-01-05 14:59:52.659 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 14:59:52 compute-0 nova_compute[185474]: 2026-01-05 14:59:52.660 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 14:59:53 compute-0 nova_compute[185474]: 2026-01-05 14:59:53.979 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.322 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "ce0d1f7f-07e0-4273-b161-871f7fd65015" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.323 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ce0d1f7f-07e0-4273-b161-871f7fd65015" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.350 185478 DEBUG nova.compute.manager [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.431 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.431 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.440 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.440 185478 INFO nova.compute.claims [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Claim successful on node compute-0.ctlplane.example.com
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.523 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Updating instance_info_cache with network_info: [{"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.557 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.558 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.559 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.604 185478 DEBUG nova.compute.provider_tree [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.619 185478 DEBUG nova.scheduler.client.report [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.637 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.638 185478 DEBUG nova.compute.manager [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.689 185478 DEBUG nova.compute.manager [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.711 185478 INFO nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.753 185478 DEBUG nova.compute.manager [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.849 185478 DEBUG nova.compute.manager [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.851 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.851 185478 INFO nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Creating image(s)
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.852 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.852 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.853 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.853 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "1c3e977bc59228847548b5a3a9b9c61b73459c24" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:54 compute-0 nova_compute[185474]: 2026-01-05 14:59:54.854 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "1c3e977bc59228847548b5a3a9b9c61b73459c24" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:55 compute-0 podman[245712]: 2026-01-05 14:59:55.642533707 +0000 UTC m=+0.128143119 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 05 14:59:55 compute-0 nova_compute[185474]: 2026-01-05 14:59:55.654 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.026 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.129 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24.part --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.130 185478 DEBUG nova.virt.images [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] 91284341-6111-4ea9-b089-0808c57a7892 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.131 185478 DEBUG nova.privsep.utils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.132 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24.part /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.328 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24.part /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24.converted" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.332 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.423 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24.converted --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.425 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "1c3e977bc59228847548b5a3a9b9c61b73459c24" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.453 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.540 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.542 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "1c3e977bc59228847548b5a3a9b9c61b73459c24" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.542 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "1c3e977bc59228847548b5a3a9b9c61b73459c24" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.557 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.574 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.630 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.631 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24,backing_fmt=raw /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.675 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24,backing_fmt=raw /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.677 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "1c3e977bc59228847548b5a3a9b9c61b73459c24" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
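The create_qcow2_image step above makes the per-instance root disk a thin qcow2 overlay whose backing file is the raw base image in _base/, sized to the flavor's 1 GiB root disk. A short sketch mirroring the logged command (paths from the log; not the Qcow2.create_image code itself):

    # 1 GiB qcow2 overlay on top of the raw base image, as logged above.
    import subprocess

    base = "/var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24"
    disk = "/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk"
    size_bytes = 1 * 1024 ** 3  # flavor root_gb=1 -> 1073741824

    subprocess.run(
        ["env", "LC_ALL=C", "LANG=C", "qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw", disk, str(size_bytes)],
        check=True,
    )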
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.677 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.737 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1c3e977bc59228847548b5a3a9b9c61b73459c24 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.738 185478 DEBUG nova.virt.disk.api [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Checking if we can resize image /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.739 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.807 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.808 185478 DEBUG nova.virt.disk.api [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Cannot resize image /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
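The can_resize_image check above compares the requested size against the disk's current virtual size and refuses to shrink, hence "Cannot resize image ... to a smaller size". A hedged sketch of that decision (the helper name can_resize_to is hypothetical; only growing is allowed):

    import json
    import subprocess

    def virtual_size(path: str) -> int:
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True,
        )
        return json.loads(out.stdout)["virtual-size"]

    def can_resize_to(path: str, requested_bytes: int) -> bool:
        # Growing a disk is fine; shrinking the root disk is rejected.
        return requested_bytes >= virtual_size(path)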
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.809 185478 DEBUG nova.objects.instance [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'migration_context' on Instance uuid ce0d1f7f-07e0-4273-b161-871f7fd65015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.838 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.838 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.839 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.854 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.908 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.909 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.910 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.920 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.977 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:56 compute-0 nova_compute[185474]: 2026-01-05 14:59:56.978 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.022 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.eph0 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.023 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.024 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.124 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.125 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.126 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Ensure instance console log exists: /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.126 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.127 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.127 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.129 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T14:59:42Z,direct_url=<?>,disk_format='qcow2',id=91284341-6111-4ea9-b089-0808c57a7892,min_disk=0,min_ram=0,name='fvt_testing_image',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T14:59:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': '91284341-6111-4ea9-b089-0808c57a7892'}], 'ephemerals': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'device_name': '/dev/vdb', 'size': 1, 'encryption_options': None, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.139 185478 WARNING nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.145 185478 DEBUG nova.virt.libvirt.host [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.146 185478 DEBUG nova.virt.libvirt.host [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.151 185478 DEBUG nova.virt.libvirt.host [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.151 185478 DEBUG nova.virt.libvirt.host [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.152 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.152 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T14:59:49Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='f27aaf85-dc78-4018-80f8-874b5fd76062',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T14:59:42Z,direct_url=<?>,disk_format='qcow2',id=91284341-6111-4ea9-b089-0808c57a7892,min_disk=0,min_ram=0,name='fvt_testing_image',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T14:59:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.153 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.153 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.153 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.154 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.154 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.154 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.155 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.155 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.155 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.156 185478 DEBUG nova.virt.hardware [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
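The topology lines above show that with no flavor or image constraints (preferred 0:0:0, limits 65536) a 1-vCPU guest collapses to a single sockets=1, cores=1, threads=1 topology. An illustrative sketch of that enumeration (mirrors the logged outcome, not nova's full _get_possible_cpu_topologies algorithm):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Enumerate (sockets, cores, threads) triples whose product equals vcpus."""
        topos = []
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] as logged above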
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.159 185478 DEBUG nova.objects.instance [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce0d1f7f-07e0-4273-b161-871f7fd65015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.173 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] End _get_guest_xml xml=<domain type="kvm">
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <uuid>ce0d1f7f-07e0-4273-b161-871f7fd65015</uuid>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <name>instance-00000005</name>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <memory>524288</memory>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <metadata>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <nova:name>fvt_testing_server</nova:name>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 14:59:57</nova:creationTime>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <nova:flavor name="fvt_testing_flavor">
Jan 05 14:59:57 compute-0 nova_compute[185474]:         <nova:memory>512</nova:memory>
Jan 05 14:59:57 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 14:59:57 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 14:59:57 compute-0 nova_compute[185474]:         <nova:ephemeral>1</nova:ephemeral>
Jan 05 14:59:57 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 14:59:57 compute-0 nova_compute[185474]:         <nova:user uuid="4c0cf318026a40748762c9e05cd1efe0">admin</nova:user>
Jan 05 14:59:57 compute-0 nova_compute[185474]:         <nova:project uuid="54417029b2fb4b749e20754214013802">admin</nova:project>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="91284341-6111-4ea9-b089-0808c57a7892"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <nova:ports/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   </metadata>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <system>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <entry name="serial">ce0d1f7f-07e0-4273-b161-871f7fd65015</entry>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <entry name="uuid">ce0d1f7f-07e0-4273-b161-871f7fd65015</entry>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     </system>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <os>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   </os>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <features>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <apic/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   </features>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   </clock>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   </cpu>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   <devices>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.eph0"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <target dev="vdb" bus="virtio"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.config"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     </disk>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/console.log" append="off"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     </serial>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <video>
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     </video>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     </rng>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 14:59:57 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 14:59:57 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 14:59:57 compute-0 nova_compute[185474]:   </devices>
Jan 05 14:59:57 compute-0 nova_compute[185474]: </domain>
Jan 05 14:59:57 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
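The domain XML dumped above defines three file-backed disks (vda root, vdb ephemeral, sda config-drive cdrom). A small sketch that extracts them with the standard library, assuming the XML has been saved to a local file named domain.xml (hypothetical path):

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        target = disk.find("target").get("dev")   # vda, vdb, sda
        source = disk.find("source").get("file")  # /var/lib/nova/instances/...
        dtype = disk.get("device")                # disk or cdrom
        print(f"{target} ({dtype}): {source}")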
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.233 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.233 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.233 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.234 185478 INFO nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Using config drive
Jan 05 14:59:57 compute-0 podman[245778]: 2026-01-05 14:59:57.617048145 +0000 UTC m=+0.089566373 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 14:59:57 compute-0 podman[245777]: 2026-01-05 14:59:57.626797189 +0000 UTC m=+0.102782341 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.645 185478 INFO nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Creating config drive at /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.config
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.654 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7uukbxgj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 14:59:57 compute-0 nova_compute[185474]: 2026-01-05 14:59:57.794 185478 DEBUG oslo_concurrency.processutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7uukbxgj" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
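The config drive logged above is an ISO9660 image built with mkisofs from a temporary staging directory of metadata, volume-labelled "config-2" so cloud-init can find it. A sketch mirroring the logged invocation (the staging directory name is the temporary path from the log and would differ per run):

    import subprocess

    staging = "/tmp/tmp7uukbxgj"  # temp dir holding the openstack/ metadata tree
    iso = "/var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015/disk.config"

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", staging],
        check=True,
    )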
Jan 05 14:59:57 compute-0 systemd-machined[156786]: New machine qemu-5-instance-00000005.
Jan 05 14:59:57 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.409 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625198.4092724, ce0d1f7f-07e0-4273-b161-871f7fd65015 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.410 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] VM Resumed (Lifecycle Event)
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.414 185478 DEBUG nova.compute.manager [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.414 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.420 185478 INFO nova.virt.libvirt.driver [-] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Instance spawned successfully.
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.421 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.436 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.448 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.452 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.452 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.453 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.453 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.453 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.454 185478 DEBUG nova.virt.libvirt.driver [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.495 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] During sync_power_state the instance has a pending task (spawning). Skip.
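The lifecycle-event handling above shows the DB power_state (0, NOSTATE) disagreeing with the hypervisor (1, RUNNING), but the sync is skipped because the instance still has a pending task ("spawning"). A hedged sketch of that decision logic (constants and helper name are illustrative, not nova's actual sync_power_state code):

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            return f"skip: pending task {task_state}"
        if db_power_state != vm_power_state:
            return f"update DB power_state to {vm_power_state}"
        return "in sync"

    print(sync_power_state(NOSTATE, RUNNING, "spawning"))
    # -> skip: pending task spawning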
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.496 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625198.4138293, ce0d1f7f-07e0-4273-b161-871f7fd65015 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.496 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] VM Started (Lifecycle Event)
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.519 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.526 185478 INFO nova.compute.manager [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Took 3.68 seconds to spawn the instance on the hypervisor.
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.527 185478 DEBUG nova.compute.manager [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.528 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.549 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.594 185478 INFO nova.compute.manager [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Took 4.19 seconds to build instance.
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.615 185478 DEBUG oslo_concurrency.lockutils [None req-65a052e8-9275-43d2-ae04-99103a72cec6 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ce0d1f7f-07e0-4273-b161-871f7fd65015" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 14:59:58 compute-0 nova_compute[185474]: 2026-01-05 14:59:58.981 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 14:59:59 compute-0 podman[201880]: time="2026-01-05T14:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 14:59:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 14:59:59 compute-0 podman[201880]: @ - - [05/Jan/2026:14:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 05 15:00:00 compute-0 nova_compute[185474]: 2026-01-05 15:00:00.656 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:00 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 15:00:00 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 15:00:01 compute-0 openstack_network_exporter[205179]: ERROR   15:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:00:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:00:01 compute-0 openstack_network_exporter[205179]: ERROR   15:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:00:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:00:03 compute-0 nova_compute[185474]: 2026-01-05 15:00:03.992 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:04 compute-0 podman[245870]: 2026-01-05 15:00:04.663328045 +0000 UTC m=+0.125760435 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 15:00:04 compute-0 podman[245869]: 2026-01-05 15:00:04.678091815 +0000 UTC m=+0.144803151 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 05 15:00:05 compute-0 nova_compute[185474]: 2026-01-05 15:00:05.659 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:07 compute-0 podman[245912]: 2026-01-05 15:00:07.60110898 +0000 UTC m=+0.090103786 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release-0.7.12=, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vcs-type=git, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Jan 05 15:00:08 compute-0 nova_compute[185474]: 2026-01-05 15:00:08.991 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:10 compute-0 nova_compute[185474]: 2026-01-05 15:00:10.662 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:12 compute-0 nova_compute[185474]: 2026-01-05 15:00:12.852 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "ce0d1f7f-07e0-4273-b161-871f7fd65015" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:00:12 compute-0 nova_compute[185474]: 2026-01-05 15:00:12.853 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ce0d1f7f-07e0-4273-b161-871f7fd65015" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:00:12 compute-0 nova_compute[185474]: 2026-01-05 15:00:12.854 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "ce0d1f7f-07e0-4273-b161-871f7fd65015-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:00:12 compute-0 nova_compute[185474]: 2026-01-05 15:00:12.854 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ce0d1f7f-07e0-4273-b161-871f7fd65015-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:00:12 compute-0 nova_compute[185474]: 2026-01-05 15:00:12.855 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ce0d1f7f-07e0-4273-b161-871f7fd65015-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:00:12 compute-0 nova_compute[185474]: 2026-01-05 15:00:12.857 185478 INFO nova.compute.manager [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Terminating instance
Jan 05 15:00:12 compute-0 nova_compute[185474]: 2026-01-05 15:00:12.859 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "refresh_cache-ce0d1f7f-07e0-4273-b161-871f7fd65015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:00:12 compute-0 nova_compute[185474]: 2026-01-05 15:00:12.860 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquired lock "refresh_cache-ce0d1f7f-07e0-4273-b161-871f7fd65015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:00:12 compute-0 nova_compute[185474]: 2026-01-05 15:00:12.860 185478 DEBUG nova.network.neutron [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 15:00:13 compute-0 nova_compute[185474]: 2026-01-05 15:00:13.341 185478 DEBUG nova.network.neutron [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 15:00:13 compute-0 nova_compute[185474]: 2026-01-05 15:00:13.903 185478 DEBUG nova.network.neutron [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:00:13 compute-0 nova_compute[185474]: 2026-01-05 15:00:13.928 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Releasing lock "refresh_cache-ce0d1f7f-07e0-4273-b161-871f7fd65015" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:00:13 compute-0 nova_compute[185474]: 2026-01-05 15:00:13.930 185478 DEBUG nova.compute.manager [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 15:00:13 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 05 15:00:13 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.385s CPU time.
Jan 05 15:00:13 compute-0 nova_compute[185474]: 2026-01-05 15:00:13.994 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:14 compute-0 systemd-machined[156786]: Machine qemu-5-instance-00000005 terminated.
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.228 185478 INFO nova.virt.libvirt.driver [-] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Instance destroyed successfully.
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.231 185478 DEBUG nova.objects.instance [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'resources' on Instance uuid ce0d1f7f-07e0-4273-b161-871f7fd65015 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.253 185478 INFO nova.virt.libvirt.driver [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Deleting instance files /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015_del
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.254 185478 INFO nova.virt.libvirt.driver [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Deletion of /var/lib/nova/instances/ce0d1f7f-07e0-4273-b161-871f7fd65015_del complete
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.318 185478 INFO nova.compute.manager [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Took 0.39 seconds to destroy the instance on the hypervisor.
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.319 185478 DEBUG oslo.service.loopingcall [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.319 185478 DEBUG nova.compute.manager [-] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.319 185478 DEBUG nova.network.neutron [-] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.654 185478 DEBUG nova.network.neutron [-] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.670 185478 DEBUG nova.network.neutron [-] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.695 185478 INFO nova.compute.manager [-] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Took 0.38 seconds to deallocate network for instance.
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.866 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:00:14 compute-0 nova_compute[185474]: 2026-01-05 15:00:14.867 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:00:15 compute-0 nova_compute[185474]: 2026-01-05 15:00:15.017 185478 DEBUG nova.compute.provider_tree [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:00:15 compute-0 nova_compute[185474]: 2026-01-05 15:00:15.056 185478 DEBUG nova.scheduler.client.report [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:00:15 compute-0 nova_compute[185474]: 2026-01-05 15:00:15.089 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:00:15 compute-0 nova_compute[185474]: 2026-01-05 15:00:15.114 185478 INFO nova.scheduler.client.report [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Deleted allocations for instance ce0d1f7f-07e0-4273-b161-871f7fd65015
Jan 05 15:00:15 compute-0 nova_compute[185474]: 2026-01-05 15:00:15.212 185478 DEBUG oslo_concurrency.lockutils [None req-85511ac7-2098-4113-b8d9-5a8f219ec14d 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "ce0d1f7f-07e0-4273-b161-871f7fd65015" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:00:15 compute-0 podman[245944]: 2026-01-05 15:00:15.636427168 +0000 UTC m=+0.119004111 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Jan 05 15:00:15 compute-0 nova_compute[185474]: 2026-01-05 15:00:15.666 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:18 compute-0 nova_compute[185474]: 2026-01-05 15:00:18.998 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:20 compute-0 nova_compute[185474]: 2026-01-05 15:00:20.669 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:23 compute-0 podman[245965]: 2026-01-05 15:00:23.654676741 +0000 UTC m=+0.120977415 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, version=9.6, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 15:00:24 compute-0 nova_compute[185474]: 2026-01-05 15:00:24.003 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:25 compute-0 nova_compute[185474]: 2026-01-05 15:00:25.671 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:26 compute-0 podman[245986]: 2026-01-05 15:00:26.70197176 +0000 UTC m=+0.172489673 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 05 15:00:28 compute-0 podman[246011]: 2026-01-05 15:00:28.59811502 +0000 UTC m=+0.086714345 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 15:00:28 compute-0 podman[246012]: 2026-01-05 15:00:28.599499728 +0000 UTC m=+0.085626475 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:00:29 compute-0 nova_compute[185474]: 2026-01-05 15:00:29.005 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:29 compute-0 nova_compute[185474]: 2026-01-05 15:00:29.223 185478 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767625214.2218783, ce0d1f7f-07e0-4273-b161-871f7fd65015 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:00:29 compute-0 nova_compute[185474]: 2026-01-05 15:00:29.224 185478 INFO nova.compute.manager [-] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] VM Stopped (Lifecycle Event)
Jan 05 15:00:29 compute-0 podman[201880]: time="2026-01-05T15:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:00:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 15:00:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4386 "" "Go-http-client/1.1"
Jan 05 15:00:29 compute-0 nova_compute[185474]: 2026-01-05 15:00:29.994 185478 DEBUG nova.compute.manager [None req-05722e0e-5f96-4188-a59c-d727daffd9e3 - - - - - -] [instance: ce0d1f7f-07e0-4273-b161-871f7fd65015] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:00:30 compute-0 nova_compute[185474]: 2026-01-05 15:00:30.674 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:31 compute-0 openstack_network_exporter[205179]: ERROR   15:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:00:31 compute-0 openstack_network_exporter[205179]: ERROR   15:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:00:34 compute-0 nova_compute[185474]: 2026-01-05 15:00:34.008 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:35 compute-0 podman[246055]: 2026-01-05 15:00:35.653174354 +0000 UTC m=+0.122169017 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 15:00:35 compute-0 podman[246054]: 2026-01-05 15:00:35.672887299 +0000 UTC m=+0.156198830 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 05 15:00:35 compute-0 nova_compute[185474]: 2026-01-05 15:00:35.677 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.754 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.755 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb55764e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.762 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf9485c0-8711-436a-aad0-658ecba71329', 'name': 'vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.765 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'name': 'test_0', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.766 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.766 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.766 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.766 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.767 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T15:00:37.766643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.854 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 1385624795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.855 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 14233900 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.856 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.940 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 1728689582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.940 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 18915144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.941 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.941 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.941 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.941 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.942 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.942 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 464426220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.942 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 74874753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.942 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 83046078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.942 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 396012509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.942 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 113701999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.943 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 62657112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.943 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.944 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.944 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.944 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T15:00:37.941982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.944 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.944 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.945 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.945 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.945 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.945 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.946 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.946 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.946 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.946 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.946 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T15:00:37.944248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.946 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T15:00:37.946381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.968 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.968 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.968 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 sshd-session[245423]: Received disconnect from 38.102.83.65 port 43898:11: disconnected by user
Jan 05 15:00:37 compute-0 sshd-session[245423]: Disconnected from user zuul 38.102.83.65 port 43898
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.992 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 sshd-session[245409]: pam_unix(sshd:session): session closed for user zuul
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.992 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.992 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.992 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.993 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.993 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.993 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.993 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T15:00:37.993628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:37 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 05 15:00:37 compute-0 systemd[1]: session-30.scope: Consumed 1.458s CPU time.
Jan 05 15:00:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:37.997 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:37 compute-0 systemd-logind[795]: Session 30 logged out. Waiting for processes to exit.
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.001 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.002 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.002 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.003 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T15:00:38.003019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.003 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.003 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.004 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.004 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 41832448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.004 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.005 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.005 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.006 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T15:00:38.006125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.007 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.007 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.007 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T15:00:38.007762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.008 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.008 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.008 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.009 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.009 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.009 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.010 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.010 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.010 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T15:00:38.010772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.011 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.011 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.011 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.012 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.012 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.012 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.013 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.013 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.013 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.013 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T15:00:38.013731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 systemd-logind[795]: Removed session 30.
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.038 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/cpu volume: 34700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.058 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/cpu volume: 47210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.059 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.059 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.060 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T15:00:38.059566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.060 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.060 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.060 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.060 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T15:00:38.060878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.061 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.061 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.061 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.062 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.062 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T15:00:38.062377) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.063 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.063 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T15:00:38.063687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.064 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.064 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.065 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T15:00:38.064963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.065 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.065 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.066 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.066 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T15:00:38.066417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.066 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.067 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.068 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.068 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.068 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T15:00:38.067909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.069 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.069 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.069 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T15:00:38.069346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.070 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.070 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.070 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.070 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.071 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T15:00:38.070820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.071 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.072 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.072 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.072 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T15:00:38.072259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.072 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.073 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.074 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T15:00:38.073614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.074 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/memory.usage volume: 48.7578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.074 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.075 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.075 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.075 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T15:00:38.075052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.075 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.076 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.076 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.076 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.077 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.077 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T15:00:38.077185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.077 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.078 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.078 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.079 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.079 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T15:00:38.078647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.079 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.079 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.079 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.080 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.081 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:00:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:00:38.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
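The ceilometer_agent_compute entries above trace one polling cycle per meter: the agent runs the local_instances discovery for the pollster, checks whether a coordination hashring applies (none is configured here), records a heartbeat, turns the libvirt stats into one sample per instance UUID, and finally logs completion of the polling task. A minimal sketch of that control flow, assuming hypothetical helper names (Pollster objects with a get_volume method, discover_local_instances, emit_sample) rather than ceilometer's real internal API:

    # Hypothetical sketch of the polling cycle visible in the log above;
    # the class and helper names are illustrative, not ceilometer's actual API.
    import datetime

    def run_polling_cycle(pollsters, discover_local_instances, emit_sample, hashrings=None):
        for pollster in pollsters:
            # "Executing discovery process ... discovery method [local_instances]"
            instances = discover_local_instances()
            # "Checking if we need coordination ... coordination group name [None]":
            # with no hashring configured, the agent polls everything it discovered.
            if hashrings:
                # a coordinated source would partition 'instances' across agents here
                pass
            # heartbeat update for the meter, then one sample per instance
            heartbeat = datetime.datetime.now(datetime.timezone.utc).isoformat()
            for instance in instances:
                volume = pollster.get_volume(instance)   # e.g. network.outgoing.bytes counter
                emit_sample(instance["uuid"], pollster.name, volume, heartbeat)
            # "Finished polling pollster <meter> in the context of pollsters"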
Jan 05 15:00:38 compute-0 podman[246096]: 2026-01-05 15:00:38.126471239 +0000 UTC m=+0.089378148 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, com.redhat.component=ubi9-container, release=1214.1726694543, config_id=kepler, io.openshift.tags=base rhel9, distribution-scope=public, io.openshift.expose-services=, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, container_name=kepler)
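The podman entry above is the periodic health probe for the kepler container: health_status=healthy with a zero failing streak, produced by the healthcheck declared in its config_data ('test': '/openstack/healthcheck kepler'). A small sketch of running the same check on demand from the host; the container name comes from the log, and using "podman healthcheck run" is an assumption about how this deployment would be queried rather than something shown in the log itself:

    # Sketch: trigger the configured healthcheck for the "kepler" container.
    # Assumes podman is on PATH and the caller is allowed to manage containers.
    import subprocess

    def container_is_healthy(name="kepler"):
        # "podman healthcheck run <container>" exits 0 when the container's
        # healthcheck command (here /openstack/healthcheck kepler) succeeds.
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        return result.returncode == 0

    if __name__ == "__main__":
        print("healthy" if container_is_healthy() else "unhealthy")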
Jan 05 15:00:39 compute-0 nova_compute[185474]: 2026-01-05 15:00:39.009 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:40 compute-0 nova_compute[185474]: 2026-01-05 15:00:40.679 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:44 compute-0 nova_compute[185474]: 2026-01-05 15:00:44.013 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:00:44.818 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:00:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:00:44.818 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:00:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:00:44.819 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
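The three ovn_metadata_agent lines show the standard oslo.concurrency locking pattern: a named in-process lock ("_check_child_processes") is acquired around ProcessMonitor._check_child_processes and released immediately afterwards, with the wait and hold times logged. A minimal sketch of the same pattern using oslo.concurrency's synchronized decorator; the lock name is taken from the log and the function body is only a placeholder:

    # Sketch of the acquire/release pattern logged above (requires oslo.concurrency).
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # While this runs, lockutils logs:
        #   Lock "_check_child_processes" acquired ... waited N s
        # and on return:
        #   Lock "_check_child_processes" "released" ... held N s
        pass  # placeholder for the real child-process liveness checks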
Jan 05 15:00:45 compute-0 nova_compute[185474]: 2026-01-05 15:00:45.681 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:46 compute-0 podman[246116]: 2026-01-05 15:00:46.609793779 +0000 UTC m=+0.083805656 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Jan 05 15:00:47 compute-0 nova_compute[185474]: 2026-01-05 15:00:47.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:47 compute-0 nova_compute[185474]: 2026-01-05 15:00:47.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:00:47 compute-0 nova_compute[185474]: 2026-01-05 15:00:47.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:47 compute-0 nova_compute[185474]: 2026-01-05 15:00:47.457 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:00:47 compute-0 nova_compute[185474]: 2026-01-05 15:00:47.458 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:00:47 compute-0 nova_compute[185474]: 2026-01-05 15:00:47.459 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:00:47 compute-0 nova_compute[185474]: 2026-01-05 15:00:47.460 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:00:47 compute-0 nova_compute[185474]: 2026-01-05 15:00:47.855 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:00:47 compute-0 nova_compute[185474]: 2026-01-05 15:00:47.960 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:00:47 compute-0 nova_compute[185474]: 2026-01-05 15:00:47.961 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.020 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.021 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.081 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.083 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.150 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.157 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.231 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.233 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.302 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.303 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.366 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.368 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.429 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
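Each "Running cmd" / "CMD ... returned" pair above is nova's resource audit probing the instance disks: qemu-img info is wrapped in oslo_concurrency.prlimit to cap the address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30), run with --force-share so the running guest is not disturbed, and its JSON output is parsed. A sketch that reproduces the same invocation with the standard library; the path, limits and flags are copied from the logged command line, and the disk file must exist on the host for it to return anything:

    # Sketch: run the same qemu-img probe nova logs above and parse its JSON output.
    import json
    import subprocess

    def qemu_img_info(disk_path):
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824", "--cpu=30", "--",
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", disk_path, "--force-share", "--output=json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return json.loads(out.stdout)   # virtual-size, actual-size, format, ...

    # Example, using a path from the log:
    # qemu_img_info("/var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk")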
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.839 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.840 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4928MB free_disk=72.37139892578125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.841 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:00:48 compute-0 nova_compute[185474]: 2026-01-05 15:00:48.841 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.015 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.071 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.072 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bf9485c0-8711-436a-aad0-658ecba71329 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.073 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.074 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.176 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing inventories for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.288 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating ProviderTree inventory for provider 81b80649-e249-4f86-9377-abfcf7fc43dd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.288 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.302 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing aggregate associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.475 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing trait associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, traits: HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.544 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.568 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.594 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:00:49 compute-0 nova_compute[185474]: 2026-01-05 15:00:49.595 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
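The resource tracker lines above are internally consistent: each of the two instances holds {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1} in placement, which matches the final view (used_ram = 512 MB reserved + 2 x 512 MB = 1536 MB, used_disk = 2 x 2 GB = 4 GB, used_vcpus = 2), and the inventory pushed to provider 81b80649-e249-4f86-9377-abfcf7fc43dd defines schedulable capacity per resource class as (total - reserved) * allocation_ratio. A short worked computation over the inventory dict from the log:

    # Worked example using the inventory reported in the log above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0        (8 vCPUs oversubscribed 4x)
    # MEMORY_MB 7167.0
    # DISK_GB 70.2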
Jan 05 15:00:50 compute-0 nova_compute[185474]: 2026-01-05 15:00:50.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:50 compute-0 nova_compute[185474]: 2026-01-05 15:00:50.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:50 compute-0 nova_compute[185474]: 2026-01-05 15:00:50.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:50 compute-0 nova_compute[185474]: 2026-01-05 15:00:50.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:50 compute-0 nova_compute[185474]: 2026-01-05 15:00:50.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:50 compute-0 nova_compute[185474]: 2026-01-05 15:00:50.684 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:51 compute-0 nova_compute[185474]: 2026-01-05 15:00:51.415 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:51 compute-0 nova_compute[185474]: 2026-01-05 15:00:51.415 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:00:51 compute-0 nova_compute[185474]: 2026-01-05 15:00:51.415 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 15:00:52 compute-0 nova_compute[185474]: 2026-01-05 15:00:52.687 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:00:52 compute-0 nova_compute[185474]: 2026-01-05 15:00:52.688 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:00:52 compute-0 nova_compute[185474]: 2026-01-05 15:00:52.689 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 15:00:52 compute-0 nova_compute[185474]: 2026-01-05 15:00:52.690 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:00:52 compute-0 sshd-session[246161]: Accepted publickey for zuul from 38.102.83.65 port 59310 ssh2: RSA SHA256:J8z/B181hdplgLZFhp0hXyUBZUpMLnoe/Gt2JPtUKmM
Jan 05 15:00:52 compute-0 systemd-logind[795]: New session 31 of user zuul.
Jan 05 15:00:52 compute-0 systemd[1]: Started Session 31 of User zuul.
Jan 05 15:00:52 compute-0 sshd-session[246161]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 15:00:53 compute-0 sudo[246338]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryjidebbdorhiezjlkbdlfsbbuukzcuk ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767625252.929273-60192-183415838597027/AnsiballZ_command.py'
Jan 05 15:00:53 compute-0 sudo[246338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 15:00:53 compute-0 python3[246340]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 15:00:53 compute-0 sudo[246338]: pam_unix(sudo:session): session closed for user root
Jan 05 15:00:54 compute-0 nova_compute[185474]: 2026-01-05 15:00:54.018 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:54 compute-0 podman[246380]: 2026-01-05 15:00:54.599537395 +0000 UTC m=+0.085155913 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, config_id=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 05 15:00:54 compute-0 nova_compute[185474]: 2026-01-05 15:00:54.891 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:00:54 compute-0 nova_compute[185474]: 2026-01-05 15:00:54.909 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:00:54 compute-0 nova_compute[185474]: 2026-01-05 15:00:54.909 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 15:00:54 compute-0 nova_compute[185474]: 2026-01-05 15:00:54.910 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:55 compute-0 nova_compute[185474]: 2026-01-05 15:00:55.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:55 compute-0 nova_compute[185474]: 2026-01-05 15:00:55.687 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:57 compute-0 podman[246400]: 2026-01-05 15:00:57.708098865 +0000 UTC m=+0.189402821 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:00:59 compute-0 nova_compute[185474]: 2026-01-05 15:00:59.020 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:00:59 compute-0 nova_compute[185474]: 2026-01-05 15:00:59.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:00:59 compute-0 nova_compute[185474]: 2026-01-05 15:00:59.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 15:00:59 compute-0 nova_compute[185474]: 2026-01-05 15:00:59.456 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 15:00:59 compute-0 podman[246424]: 2026-01-05 15:00:59.587301587 +0000 UTC m=+0.075661855 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 15:00:59 compute-0 podman[246425]: 2026-01-05 15:00:59.614309131 +0000 UTC m=+0.086521070 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 15:00:59 compute-0 podman[201880]: time="2026-01-05T15:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:00:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 15:00:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 05 15:01:00 compute-0 nova_compute[185474]: 2026-01-05 15:01:00.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:01:00 compute-0 nova_compute[185474]: 2026-01-05 15:01:00.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 15:01:00 compute-0 nova_compute[185474]: 2026-01-05 15:01:00.690 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:01 compute-0 CROND[246516]: (root) CMD (run-parts /etc/cron.hourly)
Jan 05 15:01:01 compute-0 run-parts[246524]: (/etc/cron.hourly) starting 0anacron
Jan 05 15:01:01 compute-0 run-parts[246533]: (/etc/cron.hourly) finished 0anacron
Jan 05 15:01:01 compute-0 CROND[246515]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 05 15:01:01 compute-0 openstack_network_exporter[205179]: ERROR   15:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:01:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:01:01 compute-0 openstack_network_exporter[205179]: ERROR   15:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:01:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:01:01 compute-0 sudo[246647]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siqcnsivbhtrvqpdjqudkqoiofhegisn ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767625261.1094944-60355-184148678136637/AnsiballZ_command.py'
Jan 05 15:01:01 compute-0 sudo[246647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 15:01:01 compute-0 python3[246649]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 15:01:02 compute-0 sudo[246647]: pam_unix(sudo:session): session closed for user root
Jan 05 15:01:04 compute-0 nova_compute[185474]: 2026-01-05 15:01:04.024 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:05 compute-0 nova_compute[185474]: 2026-01-05 15:01:05.692 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:06 compute-0 podman[246688]: 2026-01-05 15:01:06.62881353 +0000 UTC m=+0.092653606 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 15:01:06 compute-0 podman[246689]: 2026-01-05 15:01:06.654371693 +0000 UTC m=+0.120718768 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 15:01:08 compute-0 podman[246730]: 2026-01-05 15:01:08.638086471 +0000 UTC m=+0.130614447 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, name=ubi9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-type=git, release-0.7.12=, version=9.4, architecture=x86_64, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 05 15:01:09 compute-0 nova_compute[185474]: 2026-01-05 15:01:09.027 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:10 compute-0 nova_compute[185474]: 2026-01-05 15:01:10.696 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:11 compute-0 sudo[246921]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdccbqphfoonfihiqsuzwsjwrzkqbabi ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767625270.8985393-60519-189781618818422/AnsiballZ_command.py'
Jan 05 15:01:11 compute-0 sudo[246921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 15:01:12 compute-0 python3[246923]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 15:01:12 compute-0 sudo[246921]: pam_unix(sudo:session): session closed for user root
Jan 05 15:01:14 compute-0 nova_compute[185474]: 2026-01-05 15:01:14.030 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:15 compute-0 nova_compute[185474]: 2026-01-05 15:01:15.699 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:17 compute-0 podman[246962]: 2026-01-05 15:01:17.744510295 +0000 UTC m=+0.218737929 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 15:01:19 compute-0 nova_compute[185474]: 2026-01-05 15:01:19.033 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:20 compute-0 nova_compute[185474]: 2026-01-05 15:01:20.702 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:24 compute-0 nova_compute[185474]: 2026-01-05 15:01:24.036 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:25 compute-0 podman[246981]: 2026-01-05 15:01:25.645682603 +0000 UTC m=+0.116035590 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64)
Jan 05 15:01:25 compute-0 nova_compute[185474]: 2026-01-05 15:01:25.705 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:27 compute-0 sudo[247173]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhbioboribnovstknohxrsmgkmiqqize ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767625286.352319-60736-113540952175410/AnsiballZ_command.py'
Jan 05 15:01:27 compute-0 sudo[247173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 15:01:27 compute-0 python3[247175]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 15:01:27 compute-0 sudo[247173]: pam_unix(sudo:session): session closed for user root
Jan 05 15:01:28 compute-0 podman[247214]: 2026-01-05 15:01:28.677401886 +0000 UTC m=+0.153168688 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 05 15:01:29 compute-0 nova_compute[185474]: 2026-01-05 15:01:29.039 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:29 compute-0 podman[201880]: time="2026-01-05T15:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:01:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 15:01:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 05 15:01:30 compute-0 podman[247240]: 2026-01-05 15:01:30.602969866 +0000 UTC m=+0.076916380 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 15:01:30 compute-0 podman[247239]: 2026-01-05 15:01:30.623536784 +0000 UTC m=+0.107163970 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 15:01:30 compute-0 nova_compute[185474]: 2026-01-05 15:01:30.707 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:31 compute-0 openstack_network_exporter[205179]: ERROR   15:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:01:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:01:31 compute-0 openstack_network_exporter[205179]: ERROR   15:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:01:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:01:34 compute-0 nova_compute[185474]: 2026-01-05 15:01:34.042 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:35 compute-0 nova_compute[185474]: 2026-01-05 15:01:35.711 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:37 compute-0 podman[247280]: 2026-01-05 15:01:37.653518909 +0000 UTC m=+0.122351321 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 15:01:37 compute-0 podman[247279]: 2026-01-05 15:01:37.66900081 +0000 UTC m=+0.141942124 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 05 15:01:39 compute-0 nova_compute[185474]: 2026-01-05 15:01:39.046 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:39 compute-0 podman[247319]: 2026-01-05 15:01:39.613923625 +0000 UTC m=+0.097764775 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, config_id=kepler, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Jan 05 15:01:40 compute-0 nova_compute[185474]: 2026-01-05 15:01:40.715 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:44 compute-0 nova_compute[185474]: 2026-01-05 15:01:44.048 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:01:44.818 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:01:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:01:44.819 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:01:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:01:44.819 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:01:45 compute-0 nova_compute[185474]: 2026-01-05 15:01:45.718 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:47 compute-0 nova_compute[185474]: 2026-01-05 15:01:47.615 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:01:47 compute-0 nova_compute[185474]: 2026-01-05 15:01:47.655 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:01:47 compute-0 nova_compute[185474]: 2026-01-05 15:01:47.656 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:01:47 compute-0 nova_compute[185474]: 2026-01-05 15:01:47.656 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:01:47 compute-0 nova_compute[185474]: 2026-01-05 15:01:47.657 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:01:47 compute-0 nova_compute[185474]: 2026-01-05 15:01:47.746 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:01:47 compute-0 nova_compute[185474]: 2026-01-05 15:01:47.847 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:01:47 compute-0 nova_compute[185474]: 2026-01-05 15:01:47.849 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:01:47 compute-0 nova_compute[185474]: 2026-01-05 15:01:47.953 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:01:47 compute-0 nova_compute[185474]: 2026-01-05 15:01:47.954 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.022 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.023 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.087 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.099 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.185 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.186 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.268 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.270 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.332 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.333 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.389 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:01:48 compute-0 podman[247363]: 2026-01-05 15:01:48.655182855 +0000 UTC m=+0.136141956 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.793 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.795 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4884MB free_disk=72.37139892578125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.795 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:01:48 compute-0 nova_compute[185474]: 2026-01-05 15:01:48.796 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:01:49 compute-0 nova_compute[185474]: 2026-01-05 15:01:49.026 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:01:49 compute-0 nova_compute[185474]: 2026-01-05 15:01:49.027 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bf9485c0-8711-436a-aad0-658ecba71329 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:01:49 compute-0 nova_compute[185474]: 2026-01-05 15:01:49.027 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:01:49 compute-0 nova_compute[185474]: 2026-01-05 15:01:49.027 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:01:49 compute-0 nova_compute[185474]: 2026-01-05 15:01:49.053 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:49 compute-0 nova_compute[185474]: 2026-01-05 15:01:49.182 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:01:49 compute-0 nova_compute[185474]: 2026-01-05 15:01:49.270 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:01:49 compute-0 nova_compute[185474]: 2026-01-05 15:01:49.273 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:01:49 compute-0 nova_compute[185474]: 2026-01-05 15:01:49.273 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:01:50 compute-0 nova_compute[185474]: 2026-01-05 15:01:50.721 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:51 compute-0 nova_compute[185474]: 2026-01-05 15:01:51.056 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:01:51 compute-0 nova_compute[185474]: 2026-01-05 15:01:51.058 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:01:51 compute-0 nova_compute[185474]: 2026-01-05 15:01:51.058 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:01:51 compute-0 nova_compute[185474]: 2026-01-05 15:01:51.396 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:01:52 compute-0 nova_compute[185474]: 2026-01-05 15:01:52.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:01:52 compute-0 nova_compute[185474]: 2026-01-05 15:01:52.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:01:52 compute-0 nova_compute[185474]: 2026-01-05 15:01:52.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:01:53 compute-0 nova_compute[185474]: 2026-01-05 15:01:53.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:01:53 compute-0 nova_compute[185474]: 2026-01-05 15:01:53.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:01:53 compute-0 nova_compute[185474]: 2026-01-05 15:01:53.660 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:01:53 compute-0 nova_compute[185474]: 2026-01-05 15:01:53.661 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:01:53 compute-0 nova_compute[185474]: 2026-01-05 15:01:53.662 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 15:01:54 compute-0 nova_compute[185474]: 2026-01-05 15:01:54.057 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:55 compute-0 nova_compute[185474]: 2026-01-05 15:01:55.727 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:56 compute-0 podman[247384]: 2026-01-05 15:01:56.653012097 +0000 UTC m=+0.129414015 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=)
Jan 05 15:01:59 compute-0 nova_compute[185474]: 2026-01-05 15:01:59.062 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:01:59 compute-0 podman[247405]: 2026-01-05 15:01:59.668134846 +0000 UTC m=+0.144894154 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller)
Jan 05 15:01:59 compute-0 podman[201880]: time="2026-01-05T15:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:01:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 15:01:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4378 "" "Go-http-client/1.1"
Jan 05 15:02:00 compute-0 sshd-session[247431]: Invalid user sol from 165.22.168.95 port 39952
Jan 05 15:02:00 compute-0 sshd-session[247431]: Connection closed by invalid user sol 165.22.168.95 port 39952 [preauth]
Jan 05 15:02:00 compute-0 nova_compute[185474]: 2026-01-05 15:02:00.731 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:01 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 15:02:01 compute-0 podman[247434]: 2026-01-05 15:02:01.205863897 +0000 UTC m=+0.087327781 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 15:02:01 compute-0 podman[247435]: 2026-01-05 15:02:01.23176681 +0000 UTC m=+0.102915435 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:02:01 compute-0 openstack_network_exporter[205179]: ERROR   15:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:02:01 compute-0 openstack_network_exporter[205179]: ERROR   15:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:02:01 compute-0 nova_compute[185474]: 2026-01-05 15:02:01.913 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Updating instance_info_cache with network_info: [{"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:02:01 compute-0 nova_compute[185474]: 2026-01-05 15:02:01.938 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:02:01 compute-0 nova_compute[185474]: 2026-01-05 15:02:01.939 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 15:02:01 compute-0 nova_compute[185474]: 2026-01-05 15:02:01.940 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:02:02 compute-0 nova_compute[185474]: 2026-01-05 15:02:02.936 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:02:04 compute-0 nova_compute[185474]: 2026-01-05 15:02:04.065 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:05 compute-0 nova_compute[185474]: 2026-01-05 15:02:05.734 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:08 compute-0 podman[247475]: 2026-01-05 15:02:08.647490293 +0000 UTC m=+0.122312321 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 15:02:08 compute-0 podman[247474]: 2026-01-05 15:02:08.667612899 +0000 UTC m=+0.149818677 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 05 15:02:09 compute-0 nova_compute[185474]: 2026-01-05 15:02:09.068 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:10 compute-0 podman[247515]: 2026-01-05 15:02:10.634732086 +0000 UTC m=+0.109934006 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, container_name=kepler, release-0.7.12=, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Jan 05 15:02:10 compute-0 nova_compute[185474]: 2026-01-05 15:02:10.738 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:14 compute-0 nova_compute[185474]: 2026-01-05 15:02:14.070 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:15 compute-0 nova_compute[185474]: 2026-01-05 15:02:15.740 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:19 compute-0 nova_compute[185474]: 2026-01-05 15:02:19.074 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:19 compute-0 podman[247535]: 2026-01-05 15:02:19.642262164 +0000 UTC m=+0.124496420 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 15:02:20 compute-0 nova_compute[185474]: 2026-01-05 15:02:20.743 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:24 compute-0 nova_compute[185474]: 2026-01-05 15:02:24.078 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:25 compute-0 nova_compute[185474]: 2026-01-05 15:02:25.746 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:27 compute-0 sshd-session[246164]: Received disconnect from 38.102.83.65 port 59310:11: disconnected by user
Jan 05 15:02:27 compute-0 sshd-session[246164]: Disconnected from user zuul 38.102.83.65 port 59310
Jan 05 15:02:27 compute-0 sshd-session[246161]: pam_unix(sshd:session): session closed for user zuul
Jan 05 15:02:27 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Jan 05 15:02:27 compute-0 systemd[1]: session-31.scope: Consumed 4.795s CPU time.
Jan 05 15:02:27 compute-0 systemd-logind[795]: Session 31 logged out. Waiting for processes to exit.
Jan 05 15:02:27 compute-0 systemd-logind[795]: Removed session 31.
Jan 05 15:02:27 compute-0 podman[247554]: 2026-01-05 15:02:27.199572204 +0000 UTC m=+0.120401969 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, version=9.6, config_id=openstack_network_exporter, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Jan 05 15:02:29 compute-0 nova_compute[185474]: 2026-01-05 15:02:29.081 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:29 compute-0 podman[201880]: time="2026-01-05T15:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:02:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 15:02:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 05 15:02:30 compute-0 podman[247574]: 2026-01-05 15:02:30.684169807 +0000 UTC m=+0.157464746 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 15:02:30 compute-0 nova_compute[185474]: 2026-01-05 15:02:30.748 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:31 compute-0 openstack_network_exporter[205179]: ERROR   15:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:02:31 compute-0 openstack_network_exporter[205179]: ERROR   15:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:02:31 compute-0 podman[247597]: 2026-01-05 15:02:31.636564979 +0000 UTC m=+0.112456585 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 15:02:31 compute-0 podman[247598]: 2026-01-05 15:02:31.644678988 +0000 UTC m=+0.114772305 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 15:02:34 compute-0 nova_compute[185474]: 2026-01-05 15:02:34.084 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:35 compute-0 nova_compute[185474]: 2026-01-05 15:02:35.752 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.755 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.755 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.755 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6712840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.766 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bf9485c0-8711-436a-aad0-658ecba71329', 'name': 'vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {'metering.server_group': 'fb98dcdd-a12e-44ca-97ca-fe43134a3faa'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.769 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'name': 'test_0', 'flavor': {'id': 'afe04c80-f0ab-417e-844c-b5b05cc96b17', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '22e54d95-dd91-4f66-a65f-ce9984e648dc'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '54417029b2fb4b749e20754214013802', 'user_id': '4c0cf318026a40748762c9e05cd1efe0', 'hostId': '35f27b91af29db450050b00440256ac89bcb62a75cf7028f4bf42ecc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.769 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.770 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.770 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.770 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T15:02:37.770420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.908 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 1385624795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.910 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 14233900 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:37.910 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.026 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 1728689582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.027 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 18915144 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.027 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.028 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.029 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.029 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.029 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 464426220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.030 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 74874753 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T15:02:38.029488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.030 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.latency volume: 83046078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.031 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 396012509 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.031 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 113701999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.032 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.latency volume: 62657112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.033 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.033 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.033 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.034 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T15:02:38.033907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.034 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.034 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.035 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.035 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.036 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.036 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.037 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T15:02:38.038268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.071 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.071 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.072 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.101 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.101 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.102 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.103 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.104 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.104 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.104 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.104 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T15:02:38.104725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.110 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.114 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.115 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.115 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.115 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.116 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.116 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.116 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.116 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 41832448 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.117 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.117 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T15:02:38.115824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.118 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.118 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.118 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.119 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.119 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T15:02:38.118336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.120 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T15:02:38.119096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.120 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.120 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.120 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.121 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.121 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.121 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.121 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.122 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.122 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T15:02:38.121501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.122 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.122 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.123 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.123 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.123 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T15:02:38.123609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.164 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/cpu volume: 36450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.187 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/cpu volume: 48910000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.188 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.189 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T15:02:38.189071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.189 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.190 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.190 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.190 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.190 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T15:02:38.190775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.191 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.191 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.192 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.192 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.192 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T15:02:38.192468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.193 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.193 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.193 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T15:02:38.193834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.194 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.194 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.195 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.195 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T15:02:38.195234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.195 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.195 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.196 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.196 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.196 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.196 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.196 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.196 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T15:02:38.196591) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.197 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.197 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.197 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.197 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.197 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.198 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T15:02:38.197992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.198 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.198 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.199 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T15:02:38.199446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.199 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.200 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.200 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.200 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.201 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T15:02:38.200833) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.201 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.201 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.202 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.202 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T15:02:38.202259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.202 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.202 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.203 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.203 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.204 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/memory.usage volume: 48.92578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T15:02:38.203663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.204 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/memory.usage volume: 48.7578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.204 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.204 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.205 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T15:02:38.205043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.205 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.205 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.205 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.206 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.206 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.206 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.206 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.207 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.207 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T15:02:38.207233) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.207 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.207 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.208 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.208 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.208 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.208 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T15:02:38.208646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.209 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.209 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.209 14 DEBUG ceilometer.compute.pollsters [-] bf9485c0-8711-436a-aad0-658ecba71329/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.209 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.209 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.210 14 DEBUG ceilometer.compute.pollsters [-] 731f6e65-e951-4af3-aaf3-0322c02b154c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.210 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.211 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.212 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.213 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.214 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:02:38.215 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:02:39 compute-0 nova_compute[185474]: 2026-01-05 15:02:39.087 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:39 compute-0 podman[247638]: 2026-01-05 15:02:39.215453743 +0000 UTC m=+0.087066974 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 15:02:39 compute-0 podman[247637]: 2026-01-05 15:02:39.237707107 +0000 UTC m=+0.108543487 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 05 15:02:40 compute-0 nova_compute[185474]: 2026-01-05 15:02:40.755 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:41 compute-0 podman[247677]: 2026-01-05 15:02:41.617351942 +0000 UTC m=+0.101240689 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, config_id=kepler, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Jan 05 15:02:44 compute-0 nova_compute[185474]: 2026-01-05 15:02:44.088 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:02:44.820 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:02:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:02:44.821 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:02:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:02:44.822 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:02:45 compute-0 nova_compute[185474]: 2026-01-05 15:02:45.759 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:48 compute-0 nova_compute[185474]: 2026-01-05 15:02:48.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:02:48 compute-0 nova_compute[185474]: 2026-01-05 15:02:48.800 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:02:48 compute-0 nova_compute[185474]: 2026-01-05 15:02:48.801 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:02:48 compute-0 nova_compute[185474]: 2026-01-05 15:02:48.801 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:02:48 compute-0 nova_compute[185474]: 2026-01-05 15:02:48.802 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.090 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.113 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.169 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.170 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.228 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.229 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.293 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.295 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.362 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.369 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.429 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.431 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.502 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.504 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.565 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.567 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:02:49 compute-0 nova_compute[185474]: 2026-01-05 15:02:49.627 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.013 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.014 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4827MB free_disk=72.37139892578125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.015 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.015 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.106 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.107 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance bf9485c0-8711-436a-aad0-658ecba71329 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.107 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.108 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.193 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.209 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.211 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.212 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:02:50 compute-0 podman[247722]: 2026-01-05 15:02:50.658382096 +0000 UTC m=+0.143130066 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 15:02:50 compute-0 nova_compute[185474]: 2026-01-05 15:02:50.763 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:51 compute-0 nova_compute[185474]: 2026-01-05 15:02:51.212 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:02:51 compute-0 nova_compute[185474]: 2026-01-05 15:02:51.214 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:02:51 compute-0 nova_compute[185474]: 2026-01-05 15:02:51.215 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:02:51 compute-0 nova_compute[185474]: 2026-01-05 15:02:51.396 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:02:52 compute-0 nova_compute[185474]: 2026-01-05 15:02:52.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:02:52 compute-0 nova_compute[185474]: 2026-01-05 15:02:52.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:02:54 compute-0 nova_compute[185474]: 2026-01-05 15:02:54.092 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:54 compute-0 nova_compute[185474]: 2026-01-05 15:02:54.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:02:54 compute-0 nova_compute[185474]: 2026-01-05 15:02:54.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:02:54 compute-0 nova_compute[185474]: 2026-01-05 15:02:54.401 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 15:02:55 compute-0 nova_compute[185474]: 2026-01-05 15:02:55.766 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:55 compute-0 nova_compute[185474]: 2026-01-05 15:02:55.802 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:02:55 compute-0 nova_compute[185474]: 2026-01-05 15:02:55.803 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:02:55 compute-0 nova_compute[185474]: 2026-01-05 15:02:55.804 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 15:02:55 compute-0 nova_compute[185474]: 2026-01-05 15:02:55.805 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:02:57 compute-0 podman[247741]: 2026-01-05 15:02:57.654744319 +0000 UTC m=+0.126601487 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=)
Jan 05 15:02:59 compute-0 nova_compute[185474]: 2026-01-05 15:02:59.095 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:02:59 compute-0 podman[201880]: time="2026-01-05T15:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:02:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 15:02:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4383 "" "Go-http-client/1.1"
Jan 05 15:03:00 compute-0 nova_compute[185474]: 2026-01-05 15:03:00.769 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:01 compute-0 openstack_network_exporter[205179]: ERROR   15:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:03:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:03:01 compute-0 openstack_network_exporter[205179]: ERROR   15:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:03:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:03:01 compute-0 podman[247761]: 2026-01-05 15:03:01.713014301 +0000 UTC m=+0.190994706 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 15:03:01 compute-0 podman[247786]: 2026-01-05 15:03:01.842788993 +0000 UTC m=+0.082178571 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 15:03:01 compute-0 podman[247787]: 2026-01-05 15:03:01.901911408 +0000 UTC m=+0.137349420 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 05 15:03:02 compute-0 nova_compute[185474]: 2026-01-05 15:03:02.320 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [{"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:03:02 compute-0 nova_compute[185474]: 2026-01-05 15:03:02.722 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-731f6e65-e951-4af3-aaf3-0322c02b154c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:03:02 compute-0 nova_compute[185474]: 2026-01-05 15:03:02.724 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 15:03:02 compute-0 nova_compute[185474]: 2026-01-05 15:03:02.725 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:02 compute-0 nova_compute[185474]: 2026-01-05 15:03:02.726 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:04 compute-0 nova_compute[185474]: 2026-01-05 15:03:04.097 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:05 compute-0 nova_compute[185474]: 2026-01-05 15:03:05.772 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:09 compute-0 nova_compute[185474]: 2026-01-05 15:03:09.101 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:09 compute-0 podman[247829]: 2026-01-05 15:03:09.647815822 +0000 UTC m=+0.128155039 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 15:03:09 compute-0 podman[247830]: 2026-01-05 15:03:09.650251448 +0000 UTC m=+0.124900821 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 15:03:10 compute-0 nova_compute[185474]: 2026-01-05 15:03:10.776 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:12 compute-0 podman[247871]: 2026-01-05 15:03:12.662229856 +0000 UTC m=+0.137473903 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, release=1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.openshift.expose-services=, release-0.7.12=, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64)
Jan 05 15:03:14 compute-0 nova_compute[185474]: 2026-01-05 15:03:14.105 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:15 compute-0 nova_compute[185474]: 2026-01-05 15:03:15.780 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:19 compute-0 nova_compute[185474]: 2026-01-05 15:03:19.107 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:20 compute-0 nova_compute[185474]: 2026-01-05 15:03:20.785 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:21 compute-0 podman[247889]: 2026-01-05 15:03:21.632701569 +0000 UTC m=+0.107198992 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 05 15:03:24 compute-0 nova_compute[185474]: 2026-01-05 15:03:24.110 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:25 compute-0 nova_compute[185474]: 2026-01-05 15:03:25.787 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:28 compute-0 podman[247908]: 2026-01-05 15:03:28.66840067 +0000 UTC m=+0.141659837 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter)
Jan 05 15:03:29 compute-0 nova_compute[185474]: 2026-01-05 15:03:29.112 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:29 compute-0 podman[201880]: time="2026-01-05T15:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:03:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 15:03:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4385 "" "Go-http-client/1.1"
Jan 05 15:03:30 compute-0 nova_compute[185474]: 2026-01-05 15:03:30.791 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:31 compute-0 openstack_network_exporter[205179]: ERROR   15:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:03:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:03:31 compute-0 openstack_network_exporter[205179]: ERROR   15:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:03:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:03:32 compute-0 podman[247928]: 2026-01-05 15:03:32.613554885 +0000 UTC m=+0.088093264 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 15:03:32 compute-0 podman[247929]: 2026-01-05 15:03:32.638256231 +0000 UTC m=+0.114498396 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 05 15:03:32 compute-0 podman[247930]: 2026-01-05 15:03:32.706882161 +0000 UTC m=+0.170130486 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:03:33 compute-0 sshd-session[247926]: Received disconnect from 80.94.93.119 port 16572:11:  [preauth]
Jan 05 15:03:33 compute-0 sshd-session[247926]: Disconnected from authenticating user root 80.94.93.119 port 16572 [preauth]
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.174 185478 DEBUG oslo_concurrency.lockutils [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bf9485c0-8711-436a-aad0-658ecba71329" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.175 185478 DEBUG oslo_concurrency.lockutils [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.176 185478 DEBUG oslo_concurrency.lockutils [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "bf9485c0-8711-436a-aad0-658ecba71329-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.177 185478 DEBUG oslo_concurrency.lockutils [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.178 185478 DEBUG oslo_concurrency.lockutils [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.181 185478 INFO nova.compute.manager [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Terminating instance
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.183 185478 DEBUG nova.compute.manager [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 15:03:33 compute-0 kernel: tapadeb7ded-97 (unregistering): left promiscuous mode
Jan 05 15:03:33 compute-0 NetworkManager[56139]: <info>  [1767625413.2476] device (tapadeb7ded-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.261 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:33 compute-0 ovn_controller[97763]: 2026-01-05T15:03:33Z|00058|binding|INFO|Releasing lport adeb7ded-97b9-4df8-bd1a-dbc14421a73f from this chassis (sb_readonly=0)
Jan 05 15:03:33 compute-0 ovn_controller[97763]: 2026-01-05T15:03:33Z|00059|binding|INFO|Setting lport adeb7ded-97b9-4df8-bd1a-dbc14421a73f down in Southbound
Jan 05 15:03:33 compute-0 ovn_controller[97763]: 2026-01-05T15:03:33Z|00060|binding|INFO|Removing iface tapadeb7ded-97 ovn-installed in OVS
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.266 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.270 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:7d:54 192.168.0.72'], port_security=['fa:16:3e:ef:7d:54 192.168.0.72'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-zgjawdmpyczt-acrgehsdshfx-zaln7rhtkf7p-port-vy562cz6xjpw', 'neutron:cidrs': '192.168.0.72/24', 'neutron:device_id': 'bf9485c0-8711-436a-aad0-658ecba71329', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-905a1599-2980-4b24-9705-76e3c8a469ea', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-zgjawdmpyczt-acrgehsdshfx-zaln7rhtkf7p-port-vy562cz6xjpw', 'neutron:project_id': '54417029b2fb4b749e20754214013802', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a131d1b-ed26-4729-8c09-f87c7299dcd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.227', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9f4be22-b417-4efb-ba81-f8a9c3c4527d, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=adeb7ded-97b9-4df8-bd1a-dbc14421a73f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.272 107222 INFO neutron.agent.ovn.metadata.agent [-] Port adeb7ded-97b9-4df8-bd1a-dbc14421a73f in datapath 905a1599-2980-4b24-9705-76e3c8a469ea unbound from our chassis
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.274 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 905a1599-2980-4b24-9705-76e3c8a469ea
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.291 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.295 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[c843d1dd-2cb8-4a73-b2d4-7a4a6563fc22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:33 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 05 15:03:33 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 1.016s CPU time.
Jan 05 15:03:33 compute-0 systemd-machined[156786]: Machine qemu-4-instance-00000004 terminated.
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.325 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc414e1-01d5-4eb6-887c-c8a8856118d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.330 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[1545d37c-a16c-42b1-a30e-83e05b8eed52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.354 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b0549a-bfa8-4d7e-812f-aaeec4abd104]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.372 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[103faffa-1509-497e-943a-e21134859e97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap905a1599-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:e4:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 15, 'rx_bytes': 658, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 366227, 'reachable_time': 42078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248007, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.393 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7fbe13-e000-4d4d-a538-a886ce804560]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366246, 'tstamp': 366246}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248008, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap905a1599-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 366251, 'tstamp': 366251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248008, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.395 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905a1599-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.397 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.405 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.406 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap905a1599-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.406 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.407 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap905a1599-20, col_values=(('external_ids', {'iface-id': 'add49293-6ad0-4684-b3cd-091b92792de4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.407 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.489 185478 INFO nova.virt.libvirt.driver [-] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Instance destroyed successfully.
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.490 185478 DEBUG nova.objects.instance [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'resources' on Instance uuid bf9485c0-8711-436a-aad0-658ecba71329 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.504 185478 DEBUG nova.compute.manager [req-d5267931-f3b0-465d-88ed-292a66a70dd2 req-d76cede1-3de0-42d3-8103-937f00798178 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Received event network-vif-unplugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.504 185478 DEBUG oslo_concurrency.lockutils [req-d5267931-f3b0-465d-88ed-292a66a70dd2 req-d76cede1-3de0-42d3-8103-937f00798178 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "bf9485c0-8711-436a-aad0-658ecba71329-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.504 185478 DEBUG oslo_concurrency.lockutils [req-d5267931-f3b0-465d-88ed-292a66a70dd2 req-d76cede1-3de0-42d3-8103-937f00798178 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.504 185478 DEBUG oslo_concurrency.lockutils [req-d5267931-f3b0-465d-88ed-292a66a70dd2 req-d76cede1-3de0-42d3-8103-937f00798178 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.504 185478 DEBUG nova.compute.manager [req-d5267931-f3b0-465d-88ed-292a66a70dd2 req-d76cede1-3de0-42d3-8103-937f00798178 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] No waiting events found dispatching network-vif-unplugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.505 185478 DEBUG nova.compute.manager [req-d5267931-f3b0-465d-88ed-292a66a70dd2 req-d76cede1-3de0-42d3-8103-937f00798178 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Received event network-vif-unplugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.506 185478 DEBUG nova.virt.libvirt.vif [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T14:53:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-dmpyczt-acrgehsdshfx-zaln7rhtkf7p-vnf-bpiq3earxdjj',id=4,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T14:53:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='fb98dcdd-a12e-44ca-97ca-fe43134a3faa'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-yoo0u7c7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T14:53:17Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEzNzgyOTM1NDU5OTI4Njg3MjY9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTM3ODI5MzU0NTk5Mjg2ODcyNj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEzNzgyOTM1NDU5OTI4Njg3MjY9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 05 15:03:33 compute-0 nova_compute[185474]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTM3ODI5MzU0NTk5Mjg2ODcyNj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEzNzgyOTM1NDU5OTI4Njg3MjY9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMzc4MjkzNTQ1OTkyODY4NzI2PT0tLQo=',user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=bf9485c0-8711-436a-aad0-658ecba71329,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.506 185478 DEBUG nova.network.os_vif_util [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "address": "fa:16:3e:ef:7d:54", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.72", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapadeb7ded-97", "ovs_interfaceid": "adeb7ded-97b9-4df8-bd1a-dbc14421a73f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.507 185478 DEBUG nova.network.os_vif_util [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:7d:54,bridge_name='br-int',has_traffic_filtering=True,id=adeb7ded-97b9-4df8-bd1a-dbc14421a73f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadeb7ded-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.507 185478 DEBUG os_vif [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:7d:54,bridge_name='br-int',has_traffic_filtering=True,id=adeb7ded-97b9-4df8-bd1a-dbc14421a73f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadeb7ded-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.508 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.508 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapadeb7ded-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.510 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.513 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.516 185478 INFO os_vif [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:7d:54,bridge_name='br-int',has_traffic_filtering=True,id=adeb7ded-97b9-4df8-bd1a-dbc14421a73f,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapadeb7ded-97')
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.516 185478 INFO nova.virt.libvirt.driver [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Deleting instance files /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329_del
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.517 185478 INFO nova.virt.libvirt.driver [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Deletion of /var/lib/nova/instances/bf9485c0-8711-436a-aad0-658ecba71329_del complete
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.582 185478 INFO nova.compute.manager [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Took 0.40 seconds to destroy the instance on the hypervisor.
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.583 185478 DEBUG oslo.service.loopingcall [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.585 185478 DEBUG nova.compute.manager [-] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.585 185478 DEBUG nova.network.neutron [-] [instance: bf9485c0-8711-436a-aad0-658ecba71329] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.599 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:03:33 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:33.599 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 15:03:33 compute-0 nova_compute[185474]: 2026-01-05 15:03:33.599 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:33 compute-0 rsyslogd[237079]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 15:03:33.506 185478 DEBUG nova.virt.libvirt.vif [None req-a29f62e8-0b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 15:03:34 compute-0 nova_compute[185474]: 2026-01-05 15:03:34.115 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:34 compute-0 nova_compute[185474]: 2026-01-05 15:03:34.831 185478 DEBUG nova.network.neutron [-] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:03:34 compute-0 nova_compute[185474]: 2026-01-05 15:03:34.848 185478 INFO nova.compute.manager [-] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Took 1.26 seconds to deallocate network for instance.
Jan 05 15:03:34 compute-0 nova_compute[185474]: 2026-01-05 15:03:34.885 185478 DEBUG oslo_concurrency.lockutils [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:34 compute-0 nova_compute[185474]: 2026-01-05 15:03:34.886 185478 DEBUG oslo_concurrency.lockutils [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:34 compute-0 nova_compute[185474]: 2026-01-05 15:03:34.990 185478 DEBUG nova.compute.provider_tree [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.007 185478 DEBUG nova.scheduler.client.report [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.036 185478 DEBUG oslo_concurrency.lockutils [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.065 185478 INFO nova.scheduler.client.report [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Deleted allocations for instance bf9485c0-8711-436a-aad0-658ecba71329
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.148 185478 DEBUG oslo_concurrency.lockutils [None req-a29f62e8-0bb6-4d85-9b98-f9264be02f75 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.623 185478 DEBUG nova.compute.manager [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Received event network-vif-plugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.625 185478 DEBUG oslo_concurrency.lockutils [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "bf9485c0-8711-436a-aad0-658ecba71329-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.626 185478 DEBUG oslo_concurrency.lockutils [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.627 185478 DEBUG oslo_concurrency.lockutils [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "bf9485c0-8711-436a-aad0-658ecba71329-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.628 185478 DEBUG nova.compute.manager [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] No waiting events found dispatching network-vif-plugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.629 185478 WARNING nova.compute.manager [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Received unexpected event network-vif-plugged-adeb7ded-97b9-4df8-bd1a-dbc14421a73f for instance with vm_state deleted and task_state None.
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.630 185478 DEBUG nova.compute.manager [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Received event network-changed-adeb7ded-97b9-4df8-bd1a-dbc14421a73f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.631 185478 DEBUG nova.compute.manager [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Refreshing instance network info cache due to event network-changed-adeb7ded-97b9-4df8-bd1a-dbc14421a73f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.632 185478 DEBUG oslo_concurrency.lockutils [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.633 185478 DEBUG oslo_concurrency.lockutils [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.634 185478 DEBUG nova.network.neutron [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Refreshing network info cache for port adeb7ded-97b9-4df8-bd1a-dbc14421a73f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:03:35 compute-0 nova_compute[185474]: 2026-01-05 15:03:35.809 185478 DEBUG nova.network.neutron [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 15:03:36 compute-0 nova_compute[185474]: 2026-01-05 15:03:36.167 185478 DEBUG nova.network.neutron [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 05 15:03:36 compute-0 nova_compute[185474]: 2026-01-05 15:03:36.168 185478 DEBUG oslo_concurrency.lockutils [req-fe2e0457-f770-4864-97b2-f40ae75425bc req-b5348407-2ea4-40fe-a570-6b87bc68f32f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-bf9485c0-8711-436a-aad0-658ecba71329" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:03:38 compute-0 nova_compute[185474]: 2026-01-05 15:03:38.510 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:39 compute-0 nova_compute[185474]: 2026-01-05 15:03:39.118 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:40 compute-0 podman[248032]: 2026-01-05 15:03:40.678636708 +0000 UTC m=+0.141102892 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 15:03:40 compute-0 podman[248031]: 2026-01-05 15:03:40.704888556 +0000 UTC m=+0.174795661 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:03:43 compute-0 nova_compute[185474]: 2026-01-05 15:03:43.514 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:43 compute-0 podman[248075]: 2026-01-05 15:03:43.602181155 +0000 UTC m=+0.091270771 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=kepler, container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 15:03:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:43.602 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:03:44 compute-0 nova_compute[185474]: 2026-01-05 15:03:44.120 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:44.821 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:44.822 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:44.823 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:46 compute-0 nova_compute[185474]: 2026-01-05 15:03:46.369 185478 DEBUG oslo_concurrency.lockutils [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "731f6e65-e951-4af3-aaf3-0322c02b154c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:46 compute-0 nova_compute[185474]: 2026-01-05 15:03:46.370 185478 DEBUG oslo_concurrency.lockutils [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:46 compute-0 nova_compute[185474]: 2026-01-05 15:03:46.370 185478 DEBUG oslo_concurrency.lockutils [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:46 compute-0 nova_compute[185474]: 2026-01-05 15:03:46.370 185478 DEBUG oslo_concurrency.lockutils [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:46 compute-0 nova_compute[185474]: 2026-01-05 15:03:46.371 185478 DEBUG oslo_concurrency.lockutils [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:46 compute-0 nova_compute[185474]: 2026-01-05 15:03:46.372 185478 INFO nova.compute.manager [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Terminating instance
Jan 05 15:03:46 compute-0 nova_compute[185474]: 2026-01-05 15:03:46.373 185478 DEBUG nova.compute.manager [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 15:03:47 compute-0 kernel: tapc6393a71-e6 (unregistering): left promiscuous mode
Jan 05 15:03:47 compute-0 NetworkManager[56139]: <info>  [1767625427.2364] device (tapc6393a71-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 15:03:47 compute-0 ovn_controller[97763]: 2026-01-05T15:03:47Z|00061|binding|INFO|Releasing lport c6393a71-e622-49d1-97df-e208cd2c8f06 from this chassis (sb_readonly=0)
Jan 05 15:03:47 compute-0 ovn_controller[97763]: 2026-01-05T15:03:47Z|00062|binding|INFO|Setting lport c6393a71-e622-49d1-97df-e208cd2c8f06 down in Southbound
Jan 05 15:03:47 compute-0 ovn_controller[97763]: 2026-01-05T15:03:47Z|00063|binding|INFO|Removing iface tapc6393a71-e6 ovn-installed in OVS
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.251 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.275 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:47 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 05 15:03:47 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 20.070s CPU time.
Jan 05 15:03:47 compute-0 systemd-machined[156786]: Machine qemu-1-instance-00000001 terminated.
Jan 05 15:03:47 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:47.369 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:7f:70 192.168.0.178'], port_security=['fa:16:3e:f3:7f:70 192.168.0.178'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.178/24', 'neutron:device_id': '731f6e65-e951-4af3-aaf3-0322c02b154c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-905a1599-2980-4b24-9705-76e3c8a469ea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54417029b2fb4b749e20754214013802', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a131d1b-ed26-4729-8c09-f87c7299dcd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.228'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9f4be22-b417-4efb-ba81-f8a9c3c4527d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=c6393a71-e622-49d1-97df-e208cd2c8f06) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:03:47 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:47.370 107222 INFO neutron.agent.ovn.metadata.agent [-] Port c6393a71-e622-49d1-97df-e208cd2c8f06 in datapath 905a1599-2980-4b24-9705-76e3c8a469ea unbound from our chassis
Jan 05 15:03:47 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:47.371 107222 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 905a1599-2980-4b24-9705-76e3c8a469ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 15:03:47 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:47.373 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[60fe198b-2b83-4ca0-857d-a350c7ca1eb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:47 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:47.373 107222 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea namespace which is not needed anymore
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.495 185478 INFO nova.virt.libvirt.driver [-] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Instance destroyed successfully.
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.496 185478 DEBUG nova.objects.instance [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lazy-loading 'resources' on Instance uuid 731f6e65-e951-4af3-aaf3-0322c02b154c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.969 185478 DEBUG nova.virt.libvirt.vif [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T14:45:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T14:45:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54417029b2fb4b749e20754214013802',ramdisk_id='',reservation_id='r-6qqwyv3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='22e54d95-dd91-4f66-a65f-ce9984e648dc',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T14:45:32Z,user_data=None,user_id='4c0cf318026a40748762c9e05cd1efe0',uuid=731f6e65-e951-4af3-aaf3-0322c02b154c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.970 185478 DEBUG nova.network.os_vif_util [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converting VIF {"id": "c6393a71-e622-49d1-97df-e208cd2c8f06", "address": "fa:16:3e:f3:7f:70", "network": {"id": "905a1599-2980-4b24-9705-76e3c8a469ea", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.178", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54417029b2fb4b749e20754214013802", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6393a71-e6", "ovs_interfaceid": "c6393a71-e622-49d1-97df-e208cd2c8f06", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.971 185478 DEBUG nova.network.os_vif_util [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:7f:70,bridge_name='br-int',has_traffic_filtering=True,id=c6393a71-e622-49d1-97df-e208cd2c8f06,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6393a71-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.973 185478 DEBUG os_vif [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:7f:70,bridge_name='br-int',has_traffic_filtering=True,id=c6393a71-e622-49d1-97df-e208cd2c8f06,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6393a71-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.975 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.976 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6393a71-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.980 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.984 185478 INFO os_vif [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:7f:70,bridge_name='br-int',has_traffic_filtering=True,id=c6393a71-e622-49d1-97df-e208cd2c8f06,network=Network(905a1599-2980-4b24-9705-76e3c8a469ea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6393a71-e6')
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.985 185478 INFO nova.virt.libvirt.driver [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Deleting instance files /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c_del
Jan 05 15:03:47 compute-0 nova_compute[185474]: 2026-01-05 15:03:47.986 185478 INFO nova.virt.libvirt.driver [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Deletion of /var/lib/nova/instances/731f6e65-e951-4af3-aaf3-0322c02b154c_del complete
Jan 05 15:03:48 compute-0 neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea[239930]: [NOTICE]   (239934) : haproxy version is 2.8.14-c23fe91
Jan 05 15:03:48 compute-0 neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea[239930]: [NOTICE]   (239934) : path to executable is /usr/sbin/haproxy
Jan 05 15:03:48 compute-0 neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea[239930]: [WARNING]  (239934) : Exiting Master process...
Jan 05 15:03:48 compute-0 neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea[239930]: [ALERT]    (239934) : Current worker (239936) exited with code 143 (Terminated)
Jan 05 15:03:48 compute-0 neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea[239930]: [WARNING]  (239934) : All workers exited. Exiting... (0)
Jan 05 15:03:48 compute-0 systemd[1]: libpod-f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c.scope: Deactivated successfully.
Jan 05 15:03:48 compute-0 podman[248140]: 2026-01-05 15:03:48.034553686 +0000 UTC m=+0.506521709 container died f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 05 15:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c-userdata-shm.mount: Deactivated successfully.
Jan 05 15:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-114e2ae5f10836ef271e2f8657dd1cf97aaf34ae3ba202a3294a00f2eaad14ca-merged.mount: Deactivated successfully.
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.485 185478 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767625413.4835181, bf9485c0-8711-436a-aad0-658ecba71329 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.488 185478 INFO nova.compute.manager [-] [instance: bf9485c0-8711-436a-aad0-658ecba71329] VM Stopped (Lifecycle Event)
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.507 185478 INFO nova.compute.manager [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Took 2.13 seconds to destroy the instance on the hypervisor.
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.509 185478 DEBUG oslo.service.loopingcall [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.510 185478 DEBUG nova.compute.manager [-] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.510 185478 DEBUG nova.network.neutron [-] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 15:03:48 compute-0 podman[248140]: 2026-01-05 15:03:48.51194501 +0000 UTC m=+0.983913003 container cleanup f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.518 185478 DEBUG nova.compute.manager [req-91f996f3-0cb9-4b03-824c-9ec7364f1700 req-492d639e-11a5-4a00-af54-09ff3217c7cc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Received event network-vif-unplugged-c6393a71-e622-49d1-97df-e208cd2c8f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.518 185478 DEBUG oslo_concurrency.lockutils [req-91f996f3-0cb9-4b03-824c-9ec7364f1700 req-492d639e-11a5-4a00-af54-09ff3217c7cc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.519 185478 DEBUG oslo_concurrency.lockutils [req-91f996f3-0cb9-4b03-824c-9ec7364f1700 req-492d639e-11a5-4a00-af54-09ff3217c7cc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.519 185478 DEBUG oslo_concurrency.lockutils [req-91f996f3-0cb9-4b03-824c-9ec7364f1700 req-492d639e-11a5-4a00-af54-09ff3217c7cc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.519 185478 DEBUG nova.compute.manager [req-91f996f3-0cb9-4b03-824c-9ec7364f1700 req-492d639e-11a5-4a00-af54-09ff3217c7cc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] No waiting events found dispatching network-vif-unplugged-c6393a71-e622-49d1-97df-e208cd2c8f06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.519 185478 DEBUG nova.compute.manager [req-91f996f3-0cb9-4b03-824c-9ec7364f1700 req-492d639e-11a5-4a00-af54-09ff3217c7cc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Received event network-vif-unplugged-c6393a71-e622-49d1-97df-e208cd2c8f06 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 15:03:48 compute-0 systemd[1]: libpod-conmon-f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c.scope: Deactivated successfully.
Jan 05 15:03:48 compute-0 nova_compute[185474]: 2026-01-05 15:03:48.541 185478 DEBUG nova.compute.manager [None req-b796618f-0598-4c56-b09d-1cc70a671558 - - - - - -] [instance: bf9485c0-8711-436a-aad0-658ecba71329] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:03:49 compute-0 nova_compute[185474]: 2026-01-05 15:03:49.123 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:49 compute-0 nova_compute[185474]: 2026-01-05 15:03:49.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:49 compute-0 nova_compute[185474]: 2026-01-05 15:03:49.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:49 compute-0 podman[248167]: 2026-01-05 15:03:49.693542268 +0000 UTC m=+1.144022757 container remove f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 15:03:49 compute-0 nova_compute[185474]: 2026-01-05 15:03:49.711 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:49 compute-0 nova_compute[185474]: 2026-01-05 15:03:49.712 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:49 compute-0 nova_compute[185474]: 2026-01-05 15:03:49.712 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:49 compute-0 nova_compute[185474]: 2026-01-05 15:03:49.712 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:03:49 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:49.716 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[ac991137-98b7-4a11-94fe-89f9a427b504]: (4, ('Mon Jan  5 03:03:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea (f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c)\nf4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c\nMon Jan  5 03:03:48 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea (f4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c)\nf4aede34683b51bde752eca2fabaeb390b133b0ddd2fbd0c511838850fbf398c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:49 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:49.718 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[50447033-756d-4df3-9c94-1e5c3585c1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:49 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:49.722 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905a1599-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:03:49 compute-0 nova_compute[185474]: 2026-01-05 15:03:49.728 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:49 compute-0 kernel: tap905a1599-20: left promiscuous mode
Jan 05 15:03:49 compute-0 nova_compute[185474]: 2026-01-05 15:03:49.752 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:49 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:49.758 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6dfba7-9fcd-450f-8c95-1b3ca8ba4dea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:49 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:49.773 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[a705d22d-de2f-4518-b365-5b2b7b46001a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:49 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:49.775 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f81fc100-c6d0-4640-8a6f-7046afd975a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:49 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:49.800 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ca9ce3-aecb-4ad8-9d87-26bca9cfac84]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 366211, 'reachable_time': 37298, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248184, 'error': None, 'target': 'ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:49 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:49.817 107613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-905a1599-2980-4b24-9705-76e3c8a469ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 15:03:49 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:03:49.818 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[e52be01f-7dfe-4a17-b504-11116db65f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:03:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d905a1599\x2d2980\x2d4b24\x2d9705\x2d76e3c8a469ea.mount: Deactivated successfully.
Jan 05 15:03:50 compute-0 nova_compute[185474]: 2026-01-05 15:03:50.155 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:03:50 compute-0 nova_compute[185474]: 2026-01-05 15:03:50.157 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5360MB free_disk=72.41460800170898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:03:50 compute-0 nova_compute[185474]: 2026-01-05 15:03:50.157 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:50 compute-0 nova_compute[185474]: 2026-01-05 15:03:50.157 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:52 compute-0 podman[248187]: 2026-01-05 15:03:52.658181391 +0000 UTC m=+0.138815041 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 15:03:52 compute-0 nova_compute[185474]: 2026-01-05 15:03:52.981 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.027 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 731f6e65-e951-4af3-aaf3-0322c02b154c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.027 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.027 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.037 185478 DEBUG nova.compute.manager [req-f34dfdf9-6609-49b3-917c-e2d24209f2e3 req-3e382528-fe67-4c4d-9755-da08996ab8bc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Received event network-vif-plugged-c6393a71-e622-49d1-97df-e208cd2c8f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.038 185478 DEBUG oslo_concurrency.lockutils [req-f34dfdf9-6609-49b3-917c-e2d24209f2e3 req-3e382528-fe67-4c4d-9755-da08996ab8bc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.038 185478 DEBUG oslo_concurrency.lockutils [req-f34dfdf9-6609-49b3-917c-e2d24209f2e3 req-3e382528-fe67-4c4d-9755-da08996ab8bc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.038 185478 DEBUG oslo_concurrency.lockutils [req-f34dfdf9-6609-49b3-917c-e2d24209f2e3 req-3e382528-fe67-4c4d-9755-da08996ab8bc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.038 185478 DEBUG nova.compute.manager [req-f34dfdf9-6609-49b3-917c-e2d24209f2e3 req-3e382528-fe67-4c4d-9755-da08996ab8bc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] No waiting events found dispatching network-vif-plugged-c6393a71-e622-49d1-97df-e208cd2c8f06 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.039 185478 WARNING nova.compute.manager [req-f34dfdf9-6609-49b3-917c-e2d24209f2e3 req-3e382528-fe67-4c4d-9755-da08996ab8bc 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Received unexpected event network-vif-plugged-c6393a71-e622-49d1-97df-e208cd2c8f06 for instance with vm_state active and task_state deleting.
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.086 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.112 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.122 185478 DEBUG nova.network.neutron [-] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.125 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.168 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.169 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.170 185478 INFO nova.compute.manager [-] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Took 5.66 seconds to deallocate network for instance.
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.207 185478 DEBUG oslo_concurrency.lockutils [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.208 185478 DEBUG oslo_concurrency.lockutils [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.262 185478 DEBUG nova.compute.provider_tree [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.302 185478 DEBUG nova.scheduler.client.report [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.337 185478 DEBUG oslo_concurrency.lockutils [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.431 185478 INFO nova.scheduler.client.report [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Deleted allocations for instance 731f6e65-e951-4af3-aaf3-0322c02b154c
Jan 05 15:03:54 compute-0 nova_compute[185474]: 2026-01-05 15:03:54.520 185478 DEBUG oslo_concurrency.lockutils [None req-b5352088-c8d0-4c8a-83f0-75d3530328a4 4c0cf318026a40748762c9e05cd1efe0 54417029b2fb4b749e20754214013802 - - default default] Lock "731f6e65-e951-4af3-aaf3-0322c02b154c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:03:55 compute-0 nova_compute[185474]: 2026-01-05 15:03:55.170 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:55 compute-0 nova_compute[185474]: 2026-01-05 15:03:55.171 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:55 compute-0 nova_compute[185474]: 2026-01-05 15:03:55.192 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:55 compute-0 nova_compute[185474]: 2026-01-05 15:03:55.193 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:55 compute-0 nova_compute[185474]: 2026-01-05 15:03:55.194 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:55 compute-0 nova_compute[185474]: 2026-01-05 15:03:55.195 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:03:55 compute-0 nova_compute[185474]: 2026-01-05 15:03:55.402 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:55 compute-0 nova_compute[185474]: 2026-01-05 15:03:55.402 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:03:55 compute-0 nova_compute[185474]: 2026-01-05 15:03:55.439 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 15:03:55 compute-0 nova_compute[185474]: 2026-01-05 15:03:55.440 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:56 compute-0 nova_compute[185474]: 2026-01-05 15:03:56.194 185478 DEBUG nova.compute.manager [req-512fcc24-8a43-43e5-98e4-0b05cb2d0e0b req-62a55ba6-b321-43bc-99ca-6acf9c9606a3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Received event network-vif-deleted-c6393a71-e622-49d1-97df-e208cd2c8f06 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:03:57 compute-0 nova_compute[185474]: 2026-01-05 15:03:57.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:03:57 compute-0 nova_compute[185474]: 2026-01-05 15:03:57.985 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:59 compute-0 nova_compute[185474]: 2026-01-05 15:03:59.127 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:03:59 compute-0 podman[248205]: 2026-01-05 15:03:59.617148472 +0000 UTC m=+0.107019145 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, config_id=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 05 15:03:59 compute-0 podman[201880]: time="2026-01-05T15:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:03:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:03:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3907 "" "Go-http-client/1.1"
Jan 05 15:04:01 compute-0 openstack_network_exporter[205179]: ERROR   15:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:04:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:04:01 compute-0 openstack_network_exporter[205179]: ERROR   15:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:04:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:04:02 compute-0 nova_compute[185474]: 2026-01-05 15:04:02.490 185478 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767625427.4850905, 731f6e65-e951-4af3-aaf3-0322c02b154c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:04:02 compute-0 nova_compute[185474]: 2026-01-05 15:04:02.491 185478 INFO nova.compute.manager [-] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] VM Stopped (Lifecycle Event)
Jan 05 15:04:02 compute-0 nova_compute[185474]: 2026-01-05 15:04:02.569 185478 DEBUG nova.compute.manager [None req-9ac2514a-dde5-45b5-961a-6de3a3bd88b1 - - - - - -] [instance: 731f6e65-e951-4af3-aaf3-0322c02b154c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:04:02 compute-0 nova_compute[185474]: 2026-01-05 15:04:02.990 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:03 compute-0 podman[248225]: 2026-01-05 15:04:03.620691246 +0000 UTC m=+0.097175699 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 15:04:03 compute-0 podman[248226]: 2026-01-05 15:04:03.62085356 +0000 UTC m=+0.094359773 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:04:03 compute-0 podman[248227]: 2026-01-05 15:04:03.728794448 +0000 UTC m=+0.185232992 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 05 15:04:04 compute-0 nova_compute[185474]: 2026-01-05 15:04:04.129 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:07 compute-0 nova_compute[185474]: 2026-01-05 15:04:07.993 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:09 compute-0 nova_compute[185474]: 2026-01-05 15:04:09.131 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:11 compute-0 podman[248287]: 2026-01-05 15:04:11.620791299 +0000 UTC m=+0.097725485 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 15:04:11 compute-0 podman[248286]: 2026-01-05 15:04:11.625811903 +0000 UTC m=+0.118464813 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 15:04:12 compute-0 nova_compute[185474]: 2026-01-05 15:04:12.997 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:14 compute-0 nova_compute[185474]: 2026-01-05 15:04:14.134 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:14 compute-0 podman[248330]: 2026-01-05 15:04:14.644574824 +0000 UTC m=+0.123389485 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, version=9.4, architecture=x86_64, managed_by=edpm_ansible, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0)
Jan 05 15:04:18 compute-0 nova_compute[185474]: 2026-01-05 15:04:18.002 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:19 compute-0 nova_compute[185474]: 2026-01-05 15:04:19.137 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:23 compute-0 nova_compute[185474]: 2026-01-05 15:04:23.006 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:23 compute-0 podman[248351]: 2026-01-05 15:04:23.624885788 +0000 UTC m=+0.108817103 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 05 15:04:24 compute-0 ovn_controller[97763]: 2026-01-05T15:04:24Z|00064|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Jan 05 15:04:24 compute-0 nova_compute[185474]: 2026-01-05 15:04:24.140 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:28 compute-0 nova_compute[185474]: 2026-01-05 15:04:28.009 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:29 compute-0 nova_compute[185474]: 2026-01-05 15:04:29.144 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:29 compute-0 podman[201880]: time="2026-01-05T15:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:04:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:04:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
Jan 05 15:04:30 compute-0 podman[248370]: 2026-01-05 15:04:30.578458892 +0000 UTC m=+0.068933369 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64)
Jan 05 15:04:31 compute-0 openstack_network_exporter[205179]: ERROR   15:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:04:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:04:31 compute-0 openstack_network_exporter[205179]: ERROR   15:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:04:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:04:33 compute-0 nova_compute[185474]: 2026-01-05 15:04:33.018 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:34 compute-0 nova_compute[185474]: 2026-01-05 15:04:34.148 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:34 compute-0 podman[248392]: 2026-01-05 15:04:34.628437269 +0000 UTC m=+0.096025160 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 05 15:04:34 compute-0 podman[248391]: 2026-01-05 15:04:34.628639224 +0000 UTC m=+0.109207174 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 15:04:34 compute-0 podman[248393]: 2026-01-05 15:04:34.663856932 +0000 UTC m=+0.134366991 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.755 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.756 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.756 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.760 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.762 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.763 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.764 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.765 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.766 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.766 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.767 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.768 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.768 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.769 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.769 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.770 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.770 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.771 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.771 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.772 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.772 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faebb6cbfe0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.774 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 rsyslogd[237079]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:04:37.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:04:37 compute-0 rsyslogd[237079]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 15:04:38 compute-0 nova_compute[185474]: 2026-01-05 15:04:38.022 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:39 compute-0 nova_compute[185474]: 2026-01-05 15:04:39.152 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:42 compute-0 podman[248455]: 2026-01-05 15:04:42.658817187 +0000 UTC m=+0.136844489 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:04:42 compute-0 podman[248456]: 2026-01-05 15:04:42.665656221 +0000 UTC m=+0.137549078 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 15:04:43 compute-0 nova_compute[185474]: 2026-01-05 15:04:43.025 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:44 compute-0 nova_compute[185474]: 2026-01-05 15:04:44.154 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:04:44.822 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:04:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:04:44.823 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:04:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:04:44.824 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:04:44 compute-0 podman[248495]: 2026-01-05 15:04:44.853121552 +0000 UTC m=+0.148980165 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-container, config_id=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 05 15:04:48 compute-0 nova_compute[185474]: 2026-01-05 15:04:48.029 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:49 compute-0 nova_compute[185474]: 2026-01-05 15:04:49.156 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:49 compute-0 nova_compute[185474]: 2026-01-05 15:04:49.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:04:49 compute-0 nova_compute[185474]: 2026-01-05 15:04:49.717 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:04:49 compute-0 nova_compute[185474]: 2026-01-05 15:04:49.718 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:04:49 compute-0 nova_compute[185474]: 2026-01-05 15:04:49.718 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:04:49 compute-0 nova_compute[185474]: 2026-01-05 15:04:49.719 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:04:50 compute-0 nova_compute[185474]: 2026-01-05 15:04:50.115 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:04:50 compute-0 nova_compute[185474]: 2026-01-05 15:04:50.116 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5360MB free_disk=72.41466522216797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:04:50 compute-0 nova_compute[185474]: 2026-01-05 15:04:50.116 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:04:50 compute-0 nova_compute[185474]: 2026-01-05 15:04:50.117 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:04:50 compute-0 nova_compute[185474]: 2026-01-05 15:04:50.284 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:04:50 compute-0 nova_compute[185474]: 2026-01-05 15:04:50.285 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:04:50 compute-0 nova_compute[185474]: 2026-01-05 15:04:50.307 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:04:50 compute-0 nova_compute[185474]: 2026-01-05 15:04:50.328 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:04:50 compute-0 nova_compute[185474]: 2026-01-05 15:04:50.420 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:04:50 compute-0 nova_compute[185474]: 2026-01-05 15:04:50.420 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:04:51 compute-0 nova_compute[185474]: 2026-01-05 15:04:51.422 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:04:51 compute-0 nova_compute[185474]: 2026-01-05 15:04:51.422 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:04:51 compute-0 nova_compute[185474]: 2026-01-05 15:04:51.423 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:04:52 compute-0 nova_compute[185474]: 2026-01-05 15:04:52.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:04:53 compute-0 nova_compute[185474]: 2026-01-05 15:04:53.033 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:53 compute-0 nova_compute[185474]: 2026-01-05 15:04:53.394 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:04:54 compute-0 nova_compute[185474]: 2026-01-05 15:04:54.159 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:54 compute-0 nova_compute[185474]: 2026-01-05 15:04:54.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:04:54 compute-0 podman[248514]: 2026-01-05 15:04:54.629578059 +0000 UTC m=+0.121418272 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 05 15:04:54 compute-0 sshd-session[248534]: Invalid user solana from 165.22.168.95 port 37800
Jan 05 15:04:54 compute-0 sshd-session[248534]: Connection closed by invalid user solana 165.22.168.95 port 37800 [preauth]
Jan 05 15:04:57 compute-0 nova_compute[185474]: 2026-01-05 15:04:57.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:04:57 compute-0 nova_compute[185474]: 2026-01-05 15:04:57.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:04:57 compute-0 nova_compute[185474]: 2026-01-05 15:04:57.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 15:04:57 compute-0 nova_compute[185474]: 2026-01-05 15:04:57.426 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 15:04:57 compute-0 nova_compute[185474]: 2026-01-05 15:04:57.427 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:04:58 compute-0 nova_compute[185474]: 2026-01-05 15:04:58.037 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:59 compute-0 nova_compute[185474]: 2026-01-05 15:04:59.161 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:04:59 compute-0 nova_compute[185474]: 2026-01-05 15:04:59.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:04:59 compute-0 podman[201880]: time="2026-01-05T15:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:04:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:04:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3909 "" "Go-http-client/1.1"
Jan 05 15:05:01 compute-0 openstack_network_exporter[205179]: ERROR   15:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:05:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:05:01 compute-0 openstack_network_exporter[205179]: ERROR   15:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:05:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:05:01 compute-0 podman[248536]: 2026-01-05 15:05:01.634895938 +0000 UTC m=+0.120546579 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 15:05:03 compute-0 nova_compute[185474]: 2026-01-05 15:05:03.040 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:04 compute-0 nova_compute[185474]: 2026-01-05 15:05:04.164 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:05 compute-0 podman[248556]: 2026-01-05 15:05:05.695690076 +0000 UTC m=+0.171396019 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 15:05:05 compute-0 podman[248557]: 2026-01-05 15:05:05.699079757 +0000 UTC m=+0.148366928 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 05 15:05:05 compute-0 podman[248558]: 2026-01-05 15:05:05.719763704 +0000 UTC m=+0.168445829 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 15:05:08 compute-0 nova_compute[185474]: 2026-01-05 15:05:08.044 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:09 compute-0 nova_compute[185474]: 2026-01-05 15:05:09.170 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:13 compute-0 nova_compute[185474]: 2026-01-05 15:05:13.049 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:13 compute-0 podman[248623]: 2026-01-05 15:05:13.634049494 +0000 UTC m=+0.115629436 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 15:05:13 compute-0 podman[248622]: 2026-01-05 15:05:13.639519532 +0000 UTC m=+0.120710574 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:05:14 compute-0 nova_compute[185474]: 2026-01-05 15:05:14.175 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:15 compute-0 podman[248663]: 2026-01-05 15:05:15.642593984 +0000 UTC m=+0.124974197 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, container_name=kepler, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=)
Jan 05 15:05:18 compute-0 nova_compute[185474]: 2026-01-05 15:05:18.053 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:19 compute-0 nova_compute[185474]: 2026-01-05 15:05:19.177 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:23 compute-0 nova_compute[185474]: 2026-01-05 15:05:23.057 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:24 compute-0 nova_compute[185474]: 2026-01-05 15:05:24.180 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:25 compute-0 podman[248682]: 2026-01-05 15:05:25.64503278 +0000 UTC m=+0.124639410 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute)
Jan 05 15:05:28 compute-0 nova_compute[185474]: 2026-01-05 15:05:28.062 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:29 compute-0 nova_compute[185474]: 2026-01-05 15:05:29.182 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:29 compute-0 podman[201880]: time="2026-01-05T15:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:05:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:05:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Jan 05 15:05:31 compute-0 openstack_network_exporter[205179]: ERROR   15:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:05:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:05:31 compute-0 openstack_network_exporter[205179]: ERROR   15:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:05:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:05:32 compute-0 podman[248702]: 2026-01-05 15:05:32.606130676 +0000 UTC m=+0.091223629 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.6)
Jan 05 15:05:33 compute-0 nova_compute[185474]: 2026-01-05 15:05:33.067 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:34 compute-0 nova_compute[185474]: 2026-01-05 15:05:34.188 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:36 compute-0 podman[248723]: 2026-01-05 15:05:36.600804133 +0000 UTC m=+0.082226678 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 15:05:36 compute-0 podman[248722]: 2026-01-05 15:05:36.612011325 +0000 UTC m=+0.099330979 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 15:05:36 compute-0 podman[248724]: 2026-01-05 15:05:36.669864563 +0000 UTC m=+0.134085714 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 15:05:38 compute-0 nova_compute[185474]: 2026-01-05 15:05:38.071 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:39 compute-0 nova_compute[185474]: 2026-01-05 15:05:39.191 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:43 compute-0 nova_compute[185474]: 2026-01-05 15:05:43.075 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:44 compute-0 nova_compute[185474]: 2026-01-05 15:05:44.193 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:44 compute-0 podman[248787]: 2026-01-05 15:05:44.611945583 +0000 UTC m=+0.089034769 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 15:05:44 compute-0 podman[248786]: 2026-01-05 15:05:44.618417118 +0000 UTC m=+0.111553256 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:05:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:05:44.823 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:05:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:05:44.824 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:05:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:05:44.824 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:05:46 compute-0 podman[248828]: 2026-01-05 15:05:46.642807024 +0000 UTC m=+0.138635805 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, release=1214.1726694543)
Jan 05 15:05:48 compute-0 nova_compute[185474]: 2026-01-05 15:05:48.080 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:49 compute-0 nova_compute[185474]: 2026-01-05 15:05:49.197 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:50 compute-0 nova_compute[185474]: 2026-01-05 15:05:50.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:05:50 compute-0 nova_compute[185474]: 2026-01-05 15:05:50.459 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:05:50 compute-0 nova_compute[185474]: 2026-01-05 15:05:50.460 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:05:50 compute-0 nova_compute[185474]: 2026-01-05 15:05:50.460 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:05:50 compute-0 nova_compute[185474]: 2026-01-05 15:05:50.460 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:05:50 compute-0 nova_compute[185474]: 2026-01-05 15:05:50.834 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:05:50 compute-0 nova_compute[185474]: 2026-01-05 15:05:50.836 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5364MB free_disk=72.41466522216797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:05:50 compute-0 nova_compute[185474]: 2026-01-05 15:05:50.836 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:05:50 compute-0 nova_compute[185474]: 2026-01-05 15:05:50.837 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.432 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.432 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.539 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing inventories for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.637 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating ProviderTree inventory for provider 81b80649-e249-4f86-9377-abfcf7fc43dd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.638 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.656 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing aggregate associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.676 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing trait associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, traits: HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.706 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.925 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.926 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:05:51 compute-0 nova_compute[185474]: 2026-01-05 15:05:51.926 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:05:52 compute-0 nova_compute[185474]: 2026-01-05 15:05:52.929 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:05:52 compute-0 nova_compute[185474]: 2026-01-05 15:05:52.929 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:05:52 compute-0 nova_compute[185474]: 2026-01-05 15:05:52.930 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:05:52 compute-0 nova_compute[185474]: 2026-01-05 15:05:52.930 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:05:53 compute-0 nova_compute[185474]: 2026-01-05 15:05:53.083 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:53 compute-0 nova_compute[185474]: 2026-01-05 15:05:53.395 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:05:54 compute-0 nova_compute[185474]: 2026-01-05 15:05:54.200 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:54 compute-0 nova_compute[185474]: 2026-01-05 15:05:54.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:05:56 compute-0 podman[248848]: 2026-01-05 15:05:56.58136673 +0000 UTC m=+0.072789412 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 15:05:57 compute-0 nova_compute[185474]: 2026-01-05 15:05:57.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:05:57 compute-0 nova_compute[185474]: 2026-01-05 15:05:57.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:05:57 compute-0 nova_compute[185474]: 2026-01-05 15:05:57.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 15:05:57 compute-0 nova_compute[185474]: 2026-01-05 15:05:57.418 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 15:05:58 compute-0 nova_compute[185474]: 2026-01-05 15:05:58.086 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:58 compute-0 nova_compute[185474]: 2026-01-05 15:05:58.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:05:59 compute-0 nova_compute[185474]: 2026-01-05 15:05:59.204 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:05:59 compute-0 nova_compute[185474]: 2026-01-05 15:05:59.394 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:05:59 compute-0 podman[201880]: time="2026-01-05T15:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:05:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:05:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
Jan 05 15:06:01 compute-0 nova_compute[185474]: 2026-01-05 15:06:01.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:01 compute-0 nova_compute[185474]: 2026-01-05 15:06:01.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:01 compute-0 nova_compute[185474]: 2026-01-05 15:06:01.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 15:06:01 compute-0 nova_compute[185474]: 2026-01-05 15:06:01.416 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 15:06:01 compute-0 nova_compute[185474]: 2026-01-05 15:06:01.417 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:01 compute-0 openstack_network_exporter[205179]: ERROR   15:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:06:01 compute-0 openstack_network_exporter[205179]: ERROR   15:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:06:03 compute-0 nova_compute[185474]: 2026-01-05 15:06:03.089 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:03 compute-0 podman[248868]: 2026-01-05 15:06:03.628797891 +0000 UTC m=+0.118220047 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 05 15:06:04 compute-0 nova_compute[185474]: 2026-01-05 15:06:04.207 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:06 compute-0 nova_compute[185474]: 2026-01-05 15:06:06.984 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:07 compute-0 podman[248889]: 2026-01-05 15:06:07.599332267 +0000 UTC m=+0.085560347 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 15:06:07 compute-0 podman[248890]: 2026-01-05 15:06:07.604062604 +0000 UTC m=+0.090545910 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 15:06:07 compute-0 podman[248891]: 2026-01-05 15:06:07.661153622 +0000 UTC m=+0.140699251 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 15:06:08 compute-0 nova_compute[185474]: 2026-01-05 15:06:08.092 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:09 compute-0 nova_compute[185474]: 2026-01-05 15:06:09.210 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:11 compute-0 nova_compute[185474]: 2026-01-05 15:06:11.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:11 compute-0 nova_compute[185474]: 2026-01-05 15:06:11.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 15:06:13 compute-0 nova_compute[185474]: 2026-01-05 15:06:13.096 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:14 compute-0 nova_compute[185474]: 2026-01-05 15:06:14.213 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:14 compute-0 podman[248956]: 2026-01-05 15:06:14.806792401 +0000 UTC m=+0.121273017 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 15:06:14 compute-0 podman[248955]: 2026-01-05 15:06:14.80860558 +0000 UTC m=+0.137518316 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS)
Jan 05 15:06:14 compute-0 nova_compute[185474]: 2026-01-05 15:06:14.942 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:17 compute-0 podman[248999]: 2026-01-05 15:06:17.625881681 +0000 UTC m=+0.103078098 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, version=9.4, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64)
Jan 05 15:06:18 compute-0 nova_compute[185474]: 2026-01-05 15:06:18.101 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:19 compute-0 nova_compute[185474]: 2026-01-05 15:06:19.216 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:23 compute-0 nova_compute[185474]: 2026-01-05 15:06:23.105 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:24 compute-0 nova_compute[185474]: 2026-01-05 15:06:24.218 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:27 compute-0 podman[249019]: 2026-01-05 15:06:27.647280577 +0000 UTC m=+0.117877077 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 15:06:28 compute-0 nova_compute[185474]: 2026-01-05 15:06:28.110 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:29 compute-0 nova_compute[185474]: 2026-01-05 15:06:29.221 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:29 compute-0 podman[201880]: time="2026-01-05T15:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:06:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:06:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3913 "" "Go-http-client/1.1"
Jan 05 15:06:31 compute-0 openstack_network_exporter[205179]: ERROR   15:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:06:31 compute-0 openstack_network_exporter[205179]: ERROR   15:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:06:33 compute-0 nova_compute[185474]: 2026-01-05 15:06:33.115 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:34 compute-0 nova_compute[185474]: 2026-01-05 15:06:34.225 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:34 compute-0 podman[249038]: 2026-01-05 15:06:34.633341288 +0000 UTC m=+0.117973460 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.757 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.757 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.762 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.764 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.765 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.765 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.766 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.766 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.766 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.766 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.768 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.768 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.768 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.769 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.770 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.770 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.768 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.771 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.770 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.772 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.773 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.771 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb89cfaa0>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'network.outgoing.packets.drop': [], 'disk.device.write.bytes': [], 'disk.ephemeral.size': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'cpu': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.773 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.774 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.774 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.778 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:06:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:06:38 compute-0 nova_compute[185474]: 2026-01-05 15:06:38.120 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:38 compute-0 podman[249061]: 2026-01-05 15:06:38.588770667 +0000 UTC m=+0.072287829 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 15:06:38 compute-0 podman[249060]: 2026-01-05 15:06:38.601713006 +0000 UTC m=+0.092592495 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 15:06:38 compute-0 podman[249062]: 2026-01-05 15:06:38.671420374 +0000 UTC m=+0.150596408 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 15:06:39 compute-0 nova_compute[185474]: 2026-01-05 15:06:39.228 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:43 compute-0 nova_compute[185474]: 2026-01-05 15:06:43.124 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:44 compute-0 nova_compute[185474]: 2026-01-05 15:06:44.231 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:06:44.824 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:06:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:06:44.824 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:06:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:06:44.825 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:06:45 compute-0 podman[249124]: 2026-01-05 15:06:45.624465272 +0000 UTC m=+0.107920319 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:06:45 compute-0 podman[249125]: 2026-01-05 15:06:45.646059213 +0000 UTC m=+0.125997295 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 15:06:48 compute-0 nova_compute[185474]: 2026-01-05 15:06:48.129 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:48 compute-0 podman[249164]: 2026-01-05 15:06:48.629539375 +0000 UTC m=+0.114870116 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Jan 05 15:06:49 compute-0 nova_compute[185474]: 2026-01-05 15:06:49.233 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:51 compute-0 nova_compute[185474]: 2026-01-05 15:06:51.420 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:52 compute-0 nova_compute[185474]: 2026-01-05 15:06:52.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:52 compute-0 nova_compute[185474]: 2026-01-05 15:06:52.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:06:52 compute-0 nova_compute[185474]: 2026-01-05 15:06:52.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:52 compute-0 nova_compute[185474]: 2026-01-05 15:06:52.799 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:06:52 compute-0 nova_compute[185474]: 2026-01-05 15:06:52.799 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:06:52 compute-0 nova_compute[185474]: 2026-01-05 15:06:52.800 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:06:52 compute-0 nova_compute[185474]: 2026-01-05 15:06:52.800 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.132 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.154 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.155 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5375MB free_disk=72.41466522216797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.155 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.156 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.682 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.683 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.709 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.950 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.953 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:06:53 compute-0 nova_compute[185474]: 2026-01-05 15:06:53.954 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:06:54 compute-0 nova_compute[185474]: 2026-01-05 15:06:54.235 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:55 compute-0 nova_compute[185474]: 2026-01-05 15:06:55.956 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:55 compute-0 nova_compute[185474]: 2026-01-05 15:06:55.957 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:56 compute-0 nova_compute[185474]: 2026-01-05 15:06:56.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:58 compute-0 nova_compute[185474]: 2026-01-05 15:06:58.137 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:58 compute-0 nova_compute[185474]: 2026-01-05 15:06:58.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:58 compute-0 nova_compute[185474]: 2026-01-05 15:06:58.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:06:58 compute-0 nova_compute[185474]: 2026-01-05 15:06:58.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 15:06:58 compute-0 podman[249185]: 2026-01-05 15:06:58.608698531 +0000 UTC m=+0.093529661 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 15:06:58 compute-0 nova_compute[185474]: 2026-01-05 15:06:58.763 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 15:06:58 compute-0 nova_compute[185474]: 2026-01-05 15:06:58.764 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:06:59 compute-0 nova_compute[185474]: 2026-01-05 15:06:59.237 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:06:59 compute-0 podman[201880]: time="2026-01-05T15:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:06:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:06:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3910 "" "Go-http-client/1.1"
Jan 05 15:07:01 compute-0 openstack_network_exporter[205179]: ERROR   15:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:07:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:07:01 compute-0 openstack_network_exporter[205179]: ERROR   15:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:07:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:07:03 compute-0 nova_compute[185474]: 2026-01-05 15:07:03.141 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:03 compute-0 nova_compute[185474]: 2026-01-05 15:07:03.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:07:04 compute-0 nova_compute[185474]: 2026-01-05 15:07:04.240 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:05 compute-0 podman[249205]: 2026-01-05 15:07:05.601349908 +0000 UTC m=+0.086009799 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Jan 05 15:07:08 compute-0 nova_compute[185474]: 2026-01-05 15:07:08.145 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:09 compute-0 nova_compute[185474]: 2026-01-05 15:07:09.244 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:09 compute-0 podman[249226]: 2026-01-05 15:07:09.405793848 +0000 UTC m=+0.133072797 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 15:07:09 compute-0 podman[249227]: 2026-01-05 15:07:09.407820513 +0000 UTC m=+0.131448623 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:07:09 compute-0 podman[249228]: 2026-01-05 15:07:09.424672357 +0000 UTC m=+0.131038112 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 05 15:07:13 compute-0 nova_compute[185474]: 2026-01-05 15:07:13.149 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:14 compute-0 nova_compute[185474]: 2026-01-05 15:07:14.247 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:16 compute-0 podman[249292]: 2026-01-05 15:07:16.604523289 +0000 UTC m=+0.091625800 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 15:07:16 compute-0 podman[249291]: 2026-01-05 15:07:16.619498232 +0000 UTC m=+0.097872808 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 15:07:18 compute-0 nova_compute[185474]: 2026-01-05 15:07:18.153 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:19 compute-0 nova_compute[185474]: 2026-01-05 15:07:19.250 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:19 compute-0 podman[249335]: 2026-01-05 15:07:19.611412779 +0000 UTC m=+0.101890606 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, version=9.4, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0)
Jan 05 15:07:23 compute-0 nova_compute[185474]: 2026-01-05 15:07:23.157 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:24 compute-0 nova_compute[185474]: 2026-01-05 15:07:24.254 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:28 compute-0 nova_compute[185474]: 2026-01-05 15:07:28.162 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:29 compute-0 nova_compute[185474]: 2026-01-05 15:07:29.258 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:29 compute-0 podman[249355]: 2026-01-05 15:07:29.589534879 +0000 UTC m=+0.076597516 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 05 15:07:29 compute-0 podman[201880]: time="2026-01-05T15:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:07:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:07:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3911 "" "Go-http-client/1.1"
Jan 05 15:07:31 compute-0 openstack_network_exporter[205179]: ERROR   15:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:07:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:07:31 compute-0 openstack_network_exporter[205179]: ERROR   15:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:07:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:07:33 compute-0 nova_compute[185474]: 2026-01-05 15:07:33.167 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:34 compute-0 nova_compute[185474]: 2026-01-05 15:07:34.261 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:36 compute-0 podman[249375]: 2026-01-05 15:07:36.610180719 +0000 UTC m=+0.094922958 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, vcs-type=git)
Jan 05 15:07:38 compute-0 nova_compute[185474]: 2026-01-05 15:07:38.170 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:39 compute-0 nova_compute[185474]: 2026-01-05 15:07:39.263 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:39 compute-0 podman[249393]: 2026-01-05 15:07:39.600886313 +0000 UTC m=+0.079296828 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 15:07:39 compute-0 podman[249394]: 2026-01-05 15:07:39.635992069 +0000 UTC m=+0.110875569 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:07:39 compute-0 podman[249395]: 2026-01-05 15:07:39.671761513 +0000 UTC m=+0.135250766 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:07:43 compute-0 nova_compute[185474]: 2026-01-05 15:07:43.175 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:44 compute-0 nova_compute[185474]: 2026-01-05 15:07:44.265 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:07:44.825 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:07:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:07:44.825 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:07:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:07:44.826 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:07:47 compute-0 podman[249459]: 2026-01-05 15:07:47.620598254 +0000 UTC m=+0.099457251 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 15:07:47 compute-0 podman[249458]: 2026-01-05 15:07:47.647636683 +0000 UTC m=+0.133156220 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:07:48 compute-0 nova_compute[185474]: 2026-01-05 15:07:48.179 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:49 compute-0 nova_compute[185474]: 2026-01-05 15:07:49.269 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:49 compute-0 sshd-session[249497]: Invalid user solana from 165.22.168.95 port 33596
Jan 05 15:07:49 compute-0 sshd-session[249497]: Connection closed by invalid user solana 165.22.168.95 port 33596 [preauth]
Jan 05 15:07:50 compute-0 podman[249500]: 2026-01-05 15:07:50.615402363 +0000 UTC m=+0.104681674 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, version=9.4, container_name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.component=ubi9-container, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64)
Jan 05 15:07:52 compute-0 nova_compute[185474]: 2026-01-05 15:07:52.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:07:53 compute-0 nova_compute[185474]: 2026-01-05 15:07:53.183 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:53 compute-0 nova_compute[185474]: 2026-01-05 15:07:53.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:07:53 compute-0 nova_compute[185474]: 2026-01-05 15:07:53.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:07:53 compute-0 nova_compute[185474]: 2026-01-05 15:07:53.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.023 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.024 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.024 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.025 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.271 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.557 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.558 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5364MB free_disk=72.41464614868164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.559 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.559 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.803 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.804 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.831 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.848 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.850 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:07:54 compute-0 nova_compute[185474]: 2026-01-05 15:07:54.851 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:07:57 compute-0 nova_compute[185474]: 2026-01-05 15:07:57.848 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:07:57 compute-0 nova_compute[185474]: 2026-01-05 15:07:57.848 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:07:58 compute-0 nova_compute[185474]: 2026-01-05 15:07:58.186 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:58 compute-0 nova_compute[185474]: 2026-01-05 15:07:58.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:07:58 compute-0 nova_compute[185474]: 2026-01-05 15:07:58.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:07:58 compute-0 nova_compute[185474]: 2026-01-05 15:07:58.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 15:07:58 compute-0 nova_compute[185474]: 2026-01-05 15:07:58.431 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 15:07:58 compute-0 nova_compute[185474]: 2026-01-05 15:07:58.432 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:07:58 compute-0 nova_compute[185474]: 2026-01-05 15:07:58.434 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:07:59 compute-0 nova_compute[185474]: 2026-01-05 15:07:59.273 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:07:59 compute-0 podman[201880]: time="2026-01-05T15:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:07:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:07:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3910 "" "Go-http-client/1.1"
Jan 05 15:08:00 compute-0 nova_compute[185474]: 2026-01-05 15:08:00.594 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:08:00.593 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:08:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:08:00.594 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 15:08:00 compute-0 podman[249520]: 2026-01-05 15:08:00.645177997 +0000 UTC m=+0.128160443 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 05 15:08:01 compute-0 openstack_network_exporter[205179]: ERROR   15:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:08:01 compute-0 openstack_network_exporter[205179]: ERROR   15:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:08:03 compute-0 nova_compute[185474]: 2026-01-05 15:08:03.191 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:03 compute-0 nova_compute[185474]: 2026-01-05 15:08:03.430 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:08:04 compute-0 nova_compute[185474]: 2026-01-05 15:08:04.276 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:05 compute-0 nova_compute[185474]: 2026-01-05 15:08:05.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:08:07 compute-0 podman[249540]: 2026-01-05 15:08:07.628075155 +0000 UTC m=+0.102006432 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350)
Jan 05 15:08:08 compute-0 nova_compute[185474]: 2026-01-05 15:08:08.196 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:09 compute-0 nova_compute[185474]: 2026-01-05 15:08:09.280 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:09 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:08:09.599 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:08:10 compute-0 podman[249560]: 2026-01-05 15:08:10.614697748 +0000 UTC m=+0.091753738 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 15:08:10 compute-0 podman[249561]: 2026-01-05 15:08:10.622108816 +0000 UTC m=+0.083748553 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 15:08:10 compute-0 podman[249562]: 2026-01-05 15:08:10.692144132 +0000 UTC m=+0.146957066 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 05 15:08:13 compute-0 nova_compute[185474]: 2026-01-05 15:08:13.201 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:14 compute-0 nova_compute[185474]: 2026-01-05 15:08:14.283 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:18 compute-0 nova_compute[185474]: 2026-01-05 15:08:18.205 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:18 compute-0 podman[249630]: 2026-01-05 15:08:18.59666464 +0000 UTC m=+0.080997310 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:08:18 compute-0 podman[249631]: 2026-01-05 15:08:18.608132577 +0000 UTC m=+0.089194200 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 15:08:19 compute-0 nova_compute[185474]: 2026-01-05 15:08:19.287 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:21 compute-0 podman[249672]: 2026-01-05 15:08:21.64037127 +0000 UTC m=+0.119702056 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, vcs-type=git, container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, io.buildah.version=1.29.0)
Jan 05 15:08:23 compute-0 nova_compute[185474]: 2026-01-05 15:08:23.209 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:24 compute-0 nova_compute[185474]: 2026-01-05 15:08:24.290 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:28 compute-0 nova_compute[185474]: 2026-01-05 15:08:28.214 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:29 compute-0 nova_compute[185474]: 2026-01-05 15:08:29.293 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:29 compute-0 podman[201880]: time="2026-01-05T15:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:08:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:08:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3908 "" "Go-http-client/1.1"
Jan 05 15:08:30 compute-0 ovn_controller[97763]: 2026-01-05T15:08:30Z|00065|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Jan 05 15:08:31 compute-0 openstack_network_exporter[205179]: ERROR   15:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:08:31 compute-0 openstack_network_exporter[205179]: ERROR   15:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:08:31 compute-0 podman[249692]: 2026-01-05 15:08:31.612822399 +0000 UTC m=+0.089993441 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 15:08:33 compute-0 nova_compute[185474]: 2026-01-05 15:08:33.218 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:34 compute-0 nova_compute[185474]: 2026-01-05 15:08:34.297 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.757 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.757 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.757 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.761 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.762 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.764 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{'disk.device.write.latency': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.766 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.767 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.767 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.767 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.768 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.768 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.768 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.768 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.769 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.770 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.770 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.771 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.771 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.772 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.772 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.772 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.773 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.773 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.774 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.774 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.775 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.779 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.779 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.779 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.780 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.784 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.784 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.784 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.784 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.784 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.784 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:08:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:08:37.785 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
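[editor's note] The ceilometer_agent_compute lines above trace one complete polling cycle: each pollster is registered against a ThreadPoolExecutor, a discovery pass runs for the local_instances source, and because no instances are found on this host every meter is skipped before its task is marked finished. The sketch below is only an illustration of that register → discover → poll-or-skip flow under those assumptions; the names (Pollster, discover_local_instances, run_cycle) are invented here and are not ceilometer's actual classes.

    # Illustrative sketch only -- not ceilometer's implementation.
    # Mimics the register -> discover -> poll-or-skip cycle seen in the log above.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # Assumption: returns instances running on this hypervisor; the log
        # shows an empty result ({'local_instances': []}).
        return []

    class Pollster:
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            return [{"meter": self.name, "resource": r} for r in resources]

    def poll_one(pollster, discovery_cache):
        # Discovery results are cached per cycle and shared by all pollsters.
        if "local_instances" not in discovery_cache:
            discovery_cache["local_instances"] = discover_local_instances()
        resources = discovery_cache["local_instances"]
        if not resources:
            print(f"Skip pollster {pollster.name}, no resources found this cycle")
            return []
        return pollster.get_samples(resources)

    def run_cycle(pollsters):
        discovery_cache = {}
        with ThreadPoolExecutor() as executor:
            futures = [executor.submit(poll_one, p, discovery_cache) for p in pollsters]
            for f in futures:
                f.result()

    run_cycle([Pollster("disk.device.write.latency"), Pollster("cpu")])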
Jan 05 15:08:37 compute-0 nova_compute[185474]: 2026-01-05 15:08:37.929 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:38 compute-0 nova_compute[185474]: 2026-01-05 15:08:38.221 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:38 compute-0 podman[249711]: 2026-01-05 15:08:38.620557432 +0000 UTC m=+0.105258969 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 15:08:39 compute-0 nova_compute[185474]: 2026-01-05 15:08:39.300 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:40 compute-0 nova_compute[185474]: 2026-01-05 15:08:40.310 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:40 compute-0 nova_compute[185474]: 2026-01-05 15:08:40.361 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:40 compute-0 nova_compute[185474]: 2026-01-05 15:08:40.703 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:41 compute-0 nova_compute[185474]: 2026-01-05 15:08:41.378 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
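[editor's note] The repeated "[POLLIN] on fd 26 __log_wakeup" lines are the OVSDB IDL's event loop noting at DEBUG level that its connection socket became readable; nothing is wrong, the poller simply logs every wakeup. A rough standard-library equivalent of that readiness check is sketched below (the socketpair stands in for the OVSDB connection; this is not the ovs library's code).

    # Sketch: a poll loop that logs POLLIN wakeups, similar in spirit to the
    # ovsdbapp/ovs poller lines above.
    import select
    import socket

    sock_a, sock_b = socket.socketpair()       # stand-in for the OVSDB connection
    poller = select.poll()
    poller.register(sock_a, select.POLLIN)

    sock_b.send(b"update")                     # something arrives on the connection
    for fd, events in poller.poll(1000):
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")      # the IDL would now read and process the update
            sock_a.recv(4096)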
Jan 05 15:08:41 compute-0 podman[249731]: 2026-01-05 15:08:41.61787685 +0000 UTC m=+0.091479610 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 15:08:41 compute-0 podman[249732]: 2026-01-05 15:08:41.647618697 +0000 UTC m=+0.115614337 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 05 15:08:41 compute-0 podman[249733]: 2026-01-05 15:08:41.652838957 +0000 UTC m=+0.126104388 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
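[editor's note] Each podman health_status record above embeds the container's full config_data (image, healthcheck mount/test, ports, volumes) as key=value pairs inside a single journal message. The snippet below is a small, assumption-laden sketch of pulling the container name and health fields back out of such a line; the field names (name, health_status, health_failing_streak) come from the log, while the regex and the sample line are this sketch's own simplification of that layout.

    # Sketch: extract a few fields from a podman health_status journal line.
    # Assumes the "key=value, key=value" layout shown in the records above.
    import re

    def parse_health_line(line):
        fields = {}
        for key in ("name", "health_status", "health_failing_streak"):
            m = re.search(rf"\b{key}=([^,)]+)", line)
            if m:
                fields[key] = m.group(1).strip()
        return fields

    sample = ("... container health_status 41113f0d8484... "
              "(image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, "
              "name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, ...)")
    print(parse_health_line(sample))
    # {'name': 'openstack_network_exporter', 'health_status': 'healthy', 'health_failing_streak': '0'}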
Jan 05 15:08:42 compute-0 nova_compute[185474]: 2026-01-05 15:08:42.339 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:43 compute-0 nova_compute[185474]: 2026-01-05 15:08:43.224 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:43 compute-0 nova_compute[185474]: 2026-01-05 15:08:43.589 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:44 compute-0 nova_compute[185474]: 2026-01-05 15:08:44.303 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:08:44.827 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:08:44.828 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:08:44.829 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
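[editor's note] The three ovn_metadata_agent lines above show oslo.concurrency's standard lock instrumentation: "Acquiring", "acquired :: waited Ns", "released :: held Ns" around _check_child_processes. A minimal sketch of that same acquire/waited/held pattern using only the standard library follows; the print format imitates the log, but this is not oslo's code.

    # Sketch of the acquire/waited/held pattern logged by oslo_concurrency.lockutils,
    # reproduced here with plain threading locks.
    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def instrumented_lock(name):
        lock = _locks.setdefault(name, threading.Lock())
        print(f'Acquiring lock "{name}"')
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0
            print(f'Lock "{name}" acquired :: waited {waited:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                held = time.monotonic() - t1
                print(f'Lock "{name}" "released" :: held {held:.3f}s')

    with instrumented_lock("_check_child_processes"):
        pass  # e.g. verify that monitored child processes are still alive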
Jan 05 15:08:47 compute-0 nova_compute[185474]: 2026-01-05 15:08:47.758 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:48 compute-0 nova_compute[185474]: 2026-01-05 15:08:48.227 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:49 compute-0 nova_compute[185474]: 2026-01-05 15:08:49.306 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:49 compute-0 podman[249796]: 2026-01-05 15:08:49.614307498 +0000 UTC m=+0.101598571 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 15:08:49 compute-0 podman[249795]: 2026-01-05 15:08:49.623937675 +0000 UTC m=+0.109940304 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:08:50 compute-0 nova_compute[185474]: 2026-01-05 15:08:50.296 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:51 compute-0 nova_compute[185474]: 2026-01-05 15:08:51.166 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:51 compute-0 nova_compute[185474]: 2026-01-05 15:08:51.540 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:51 compute-0 nova_compute[185474]: 2026-01-05 15:08:51.797 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:52 compute-0 podman[249836]: 2026-01-05 15:08:52.602638086 +0000 UTC m=+0.089377045 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_id=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 15:08:53 compute-0 nova_compute[185474]: 2026-01-05 15:08:53.232 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:53 compute-0 nova_compute[185474]: 2026-01-05 15:08:53.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:08:53 compute-0 nova_compute[185474]: 2026-01-05 15:08:53.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:08:53 compute-0 nova_compute[185474]: 2026-01-05 15:08:53.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
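[editor's note] The periodic-task lines show nova running each housekeeping task on its timer and short-circuiting when the corresponding config interval is disabled: here CONF.reclaim_instance_interval <= 0, so _reclaim_queued_deletes exits immediately. The guard itself is simple; a sketch with made-up names (Conf, reclaim_queued_deletes) that only illustrates the interval check, not nova's task machinery:

    # Sketch: a periodic task that no-ops when its interval is disabled,
    # mirroring "CONF.reclaim_instance_interval <= 0, skipping..." above.
    from dataclasses import dataclass

    @dataclass
    class Conf:
        reclaim_instance_interval: int = 0   # 0 or negative disables soft-delete reclaim

    def reclaim_queued_deletes(conf):
        if conf.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise look up soft-deleted instances older than the interval
        # and really delete them (elided in this sketch).

    reclaim_queued_deletes(Conf())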
Jan 05 15:08:54 compute-0 nova_compute[185474]: 2026-01-05 15:08:54.184 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:54 compute-0 nova_compute[185474]: 2026-01-05 15:08:54.185 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:54 compute-0 nova_compute[185474]: 2026-01-05 15:08:54.308 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:54 compute-0 nova_compute[185474]: 2026-01-05 15:08:54.368 185478 DEBUG nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 15:08:54 compute-0 nova_compute[185474]: 2026-01-05 15:08:54.975 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:54 compute-0 nova_compute[185474]: 2026-01-05 15:08:54.976 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:54 compute-0 nova_compute[185474]: 2026-01-05 15:08:54.986 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 15:08:54 compute-0 nova_compute[185474]: 2026-01-05 15:08:54.986 185478 INFO nova.compute.claims [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Claim successful on node compute-0.ctlplane.example.com
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.287 185478 DEBUG nova.compute.provider_tree [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.303 185478 DEBUG nova.scheduler.client.report [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
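[editor's note] The inventory dict above is what the resource tracker reports to placement for provider 81b80649-e249-4f86-9377-abfcf7fc43dd. The schedulable capacity of each resource class follows the usual placement formula, capacity = (total - reserved) * allocation_ratio, which for these values gives 32 VCPU, 7167 MB of RAM and roughly 70 GB of disk. A quick check using the logged numbers:

    # Compute effective (schedulable) capacity from the inventory logged above,
    # using the standard placement formula: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {round(capacity, 1)}")
    # VCPU: 32.0
    # MEMORY_MB: 7167.0
    # DISK_GB: 70.2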
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.327 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.329 185478 DEBUG nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.376 185478 DEBUG nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.376 185478 DEBUG nova.network.neutron [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.396 185478 INFO nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.425 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.425 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.426 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.426 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.427 185478 DEBUG nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.543 185478 DEBUG nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.545 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.546 185478 INFO nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Creating image(s)
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.547 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.548 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.549 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.549 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.550 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
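[editor's note] The lock name "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" in the last two lines is a 40-hex-digit value, consistent with nova keying its image-cache entries by a SHA-1 of the image identifier so that concurrent boots of the same image serialize on a single download. The sketch below shows that idea only; download behaviour and the cache path are assumptions of this sketch, not nova's code.

    # Sketch: serialize image downloads per cache entry, where the entry (and its
    # lock) is named by a SHA-1 of the image id, as the hex lock name above suggests.
    import hashlib
    import threading

    _cache_locks = {}

    def cache_image(image_id, fetch):
        key = hashlib.sha1(image_id.encode("utf-8")).hexdigest()
        lock = _cache_locks.setdefault(key, threading.Lock())
        with lock:                                        # one fetch per cached image at a time
            path = f"/var/lib/nova/instances/_base/{key}" # assumed cache layout
            return fetch(image_id, path)

    cache_image("example-image-id", lambda img, path: path)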
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.846 185478 DEBUG nova.policy [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b1c84f20ffdd429d9965ed731c086635', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '23dc0aab10ca466cb1b268ba1c456ac1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.867 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.868 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5372MB free_disk=72.41830825805664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.869 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:55 compute-0 nova_compute[185474]: 2026-01-05 15:08:55.869 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:56 compute-0 nova_compute[185474]: 2026-01-05 15:08:56.039 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 9f321f76-b34e-4ad0-b6c4-285f4470baa0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:08:56 compute-0 nova_compute[185474]: 2026-01-05 15:08:56.040 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:08:56 compute-0 nova_compute[185474]: 2026-01-05 15:08:56.040 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:08:56 compute-0 nova_compute[185474]: 2026-01-05 15:08:56.097 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:08:56 compute-0 nova_compute[185474]: 2026-01-05 15:08:56.116 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:08:56 compute-0 nova_compute[185474]: 2026-01-05 15:08:56.146 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:08:56 compute-0 nova_compute[185474]: 2026-01-05 15:08:56.147 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:56 compute-0 nova_compute[185474]: 2026-01-05 15:08:56.760 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "b609148c-bafc-4084-9491-68114aa80c67" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:56 compute-0 nova_compute[185474]: 2026-01-05 15:08:56.760 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:56 compute-0 nova_compute[185474]: 2026-01-05 15:08:56.781 185478 DEBUG nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.062 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.063 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.077 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.078 185478 INFO nova.compute.claims [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Claim successful on node compute-0.ctlplane.example.com
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.148 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.439 185478 DEBUG nova.compute.provider_tree [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.461 185478 DEBUG nova.scheduler.client.report [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.495 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.496 185478 DEBUG nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.542 185478 DEBUG nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.542 185478 DEBUG nova.network.neutron [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.565 185478 INFO nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.580 185478 DEBUG nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.622 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.716 185478 DEBUG nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.718 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.718 185478 INFO nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Creating image(s)
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.719 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "/var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.719 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "/var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.719 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "/var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.720 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.722 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7.part --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.722 185478 DEBUG nova.virt.images [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] e22fea2c-125b-4347-8d96-267cb6a6831b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.723 185478 DEBUG nova.privsep.utils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.723 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7.part /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:57 compute-0 nova_compute[185474]: 2026-01-05 15:08:57.919 185478 DEBUG nova.network.neutron [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Successfully created port: 5d68d02c-7204-4217-adec-1d5b6f2fc0be _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.003 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7.part /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7.converted" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.008 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.086 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7.converted --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.088 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.099 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.100 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.111 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.136 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.159 185478 DEBUG nova.policy [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dbda6f7f58004adf93ccce9df032cbbb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '678014b38c6f4f25a192ebc53f68039f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.207 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.208 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.209 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.219 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.237 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.239 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.240 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.283 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.284 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7,backing_fmt=raw /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.395 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.430 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7,backing_fmt=raw /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk 1073741824" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.432 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.433 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.450 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.464 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.506 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.507 185478 DEBUG nova.virt.disk.api [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Checking if we can resize image /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.508 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.527 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.529 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7,backing_fmt=raw /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.567 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.568 185478 DEBUG nova.virt.disk.api [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Cannot resize image /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.569 185478 DEBUG nova.objects.instance [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f321f76-b34e-4ad0-b6c4-285f4470baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.696 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7,backing_fmt=raw /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk 1073741824" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.697 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.698 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.728 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.730 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Ensure instance console log exists: /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.731 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.731 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.732 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.804 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.805 185478 DEBUG nova.virt.disk.api [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Checking if we can resize image /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.805 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.883 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.884 185478 DEBUG nova.virt.disk.api [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Cannot resize image /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.885 185478 DEBUG nova.objects.instance [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lazy-loading 'migration_context' on Instance uuid b609148c-bafc-4084-9491-68114aa80c67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.929 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.930 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Ensure instance console log exists: /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.931 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.931 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:08:58 compute-0 nova_compute[185474]: 2026-01-05 15:08:58.932 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:08:59 compute-0 nova_compute[185474]: 2026-01-05 15:08:59.311 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:08:59 compute-0 nova_compute[185474]: 2026-01-05 15:08:59.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:08:59 compute-0 nova_compute[185474]: 2026-01-05 15:08:59.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:08:59 compute-0 nova_compute[185474]: 2026-01-05 15:08:59.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 15:08:59 compute-0 nova_compute[185474]: 2026-01-05 15:08:59.606 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 05 15:08:59 compute-0 nova_compute[185474]: 2026-01-05 15:08:59.607 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 05 15:08:59 compute-0 nova_compute[185474]: 2026-01-05 15:08:59.608 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 15:08:59 compute-0 podman[201880]: time="2026-01-05T15:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:08:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 15:08:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3914 "" "Go-http-client/1.1"
Jan 05 15:09:00 compute-0 nova_compute[185474]: 2026-01-05 15:09:00.386 185478 DEBUG nova.network.neutron [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Successfully updated port: 5d68d02c-7204-4217-adec-1d5b6f2fc0be _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 15:09:00 compute-0 nova_compute[185474]: 2026-01-05 15:09:00.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:09:00 compute-0 nova_compute[185474]: 2026-01-05 15:09:00.569 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:00 compute-0 nova_compute[185474]: 2026-01-05 15:09:00.569 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquired lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:00 compute-0 nova_compute[185474]: 2026-01-05 15:09:00.569 185478 DEBUG nova.network.neutron [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 15:09:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:00.665 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:09:00 compute-0 nova_compute[185474]: 2026-01-05 15:09:00.666 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:00.667 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 15:09:00 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:00.668 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:00 compute-0 nova_compute[185474]: 2026-01-05 15:09:00.882 185478 DEBUG nova.network.neutron [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 15:09:01 compute-0 openstack_network_exporter[205179]: ERROR   15:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:09:01 compute-0 openstack_network_exporter[205179]: ERROR   15:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:09:02 compute-0 nova_compute[185474]: 2026-01-05 15:09:02.291 185478 DEBUG nova.network.neutron [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Successfully created port: fae4cff5-7c84-4731-9afc-a8de3de83750 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 15:09:02 compute-0 podman[249900]: 2026-01-05 15:09:02.578945167 +0000 UTC m=+0.072040480 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 05 15:09:02 compute-0 nova_compute[185474]: 2026-01-05 15:09:02.721 185478 DEBUG nova.compute.manager [req-1e67e6b4-510d-4f8e-86bc-bd5647b592d7 req-d9bace2f-f987-43fe-8817-f828d9c84347 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-changed-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:02 compute-0 nova_compute[185474]: 2026-01-05 15:09:02.722 185478 DEBUG nova.compute.manager [req-1e67e6b4-510d-4f8e-86bc-bd5647b592d7 req-d9bace2f-f987-43fe-8817-f828d9c84347 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Refreshing instance network info cache due to event network-changed-5d68d02c-7204-4217-adec-1d5b6f2fc0be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:09:02 compute-0 nova_compute[185474]: 2026-01-05 15:09:02.722 185478 DEBUG oslo_concurrency.lockutils [req-1e67e6b4-510d-4f8e-86bc-bd5647b592d7 req-d9bace2f-f987-43fe-8817-f828d9c84347 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:02 compute-0 nova_compute[185474]: 2026-01-05 15:09:02.888 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquiring lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:02 compute-0 nova_compute[185474]: 2026-01-05 15:09:02.889 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:02 compute-0 nova_compute[185474]: 2026-01-05 15:09:02.910 185478 DEBUG nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
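Note: the two lockutils records above come from oslo.concurrency's synchronized decorator wrapping _locked_do_build_and_run_instance: "Acquiring lock" is logged before the semaphore is taken and "acquired ... waited 0.001s" once it is held. A minimal sketch of the same pattern, with the lock name copied from the log and everything else illustrative:

    from oslo_concurrency import lockutils

    # In-process lock keyed by the instance UUID; the decorator's inner wrapper
    # emits the "Acquiring lock ..." / "Lock ... acquired ..." DEBUG messages
    # seen above (lockutils.py:404 and :409).
    @lockutils.synchronized('e8f3f84a-a594-43d9-bab3-0c34ae22eb35')
    def _locked_do_build_and_run_instance():
        pass  # build_and_run_instance work happens here

    _locked_do_build_and_run_instance()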
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.087 185478 DEBUG nova.network.neutron [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Updating instance_info_cache with network_info: [{"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.092 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.093 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.104 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.105 185478 INFO nova.compute.claims [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Claim successful on node compute-0.ctlplane.example.com
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.113 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Releasing lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.113 185478 DEBUG nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Instance network_info: |[{"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.115 185478 DEBUG oslo_concurrency.lockutils [req-1e67e6b4-510d-4f8e-86bc-bd5647b592d7 req-d9bace2f-f987-43fe-8817-f828d9c84347 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.116 185478 DEBUG nova.network.neutron [req-1e67e6b4-510d-4f8e-86bc-bd5647b592d7 req-d9bace2f-f987-43fe-8817-f828d9c84347 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Refreshing network info cache for port 5d68d02c-7204-4217-adec-1d5b6f2fc0be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.119 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Start _get_guest_xml network_info=[{"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T15:08:04Z,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T15:08:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.129 185478 WARNING nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.140 185478 DEBUG nova.virt.libvirt.host [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.142 185478 DEBUG nova.virt.libvirt.host [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.158 185478 DEBUG nova.virt.libvirt.host [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.159 185478 DEBUG nova.virt.libvirt.host [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
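Note: the pair of host checks above first looks for a cgroups-v1 CPU controller (not present) and then finds one on the unified cgroups-v2 hierarchy, which is what allows CPU shares and quota to be applied to the guest. On a v2 host this amounts to checking the root controllers file; a rough equivalent, not nova's exact code path:

    # On a cgroups-v2 (unified) host the enabled controllers are listed in
    # /sys/fs/cgroup/cgroup.controllers; "cpu" must be present for CPU tuning.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()

    print('cpu' in controllers)   # True on compute-0, per "CPU controller found on host."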
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.160 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.160 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T15:08:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3a2fb381-0342-40f9-8eb5-089f8c9475fd',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T15:08:04Z,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T15:08:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.161 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.161 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.162 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.162 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.163 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.163 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.164 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.164 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.165 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.165 185478 DEBUG nova.virt.hardware [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
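Note: the topology records above show the selection reasoning: the m1.nano flavor requests 1 vCPU, neither flavor nor image sets any sockets/cores/threads limits or preferences (all 0:0:0), so the limits fall back to 65536 each and the only factorization of 1 vCPU is 1:1:1. An illustrative enumeration of that search, not nova's implementation:

    # List (sockets, cores, threads) combinations that multiply out to `vcpus`,
    # bounded by the limits reported in the log (65536 each).
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))   # [(1, 1, 1)] -- "Got 1 possible topologies"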
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.170 185478 DEBUG nova.virt.libvirt.vif [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T15:08:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-864778593',display_name='tempest-ServerActionsTestJSON-server-864778593',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-864778593',id=6,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSqj77vlD6kVeek16cO/Hhu/zNaQXeoSK+F7dXcoh+Z9es9Ys2ZMWKCWVSXggTtqS4B5KUVwu17u1PvVEzOSYCL9wnO8by7z4oz/x0vi0Pzvt3LMGG6NC/ghGg3ZVB5ig==',key_name='tempest-keypair-763020533',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23dc0aab10ca466cb1b268ba1c456ac1',ramdisk_id='',reservation_id='r-75f25068',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-292757575',owner_user_name='tempest-ServerActionsTestJSON-292757575-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:08:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b1c84f20ffdd429d9965ed731c086635',uuid=9f321f76-b34e-4ad0-b6c4-285f4470baa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": 
null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.171 185478 DEBUG nova.network.os_vif_util [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converting VIF {"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.172 185478 DEBUG nova.network.os_vif_util [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.173 185478 DEBUG nova.objects.instance [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f321f76-b34e-4ad0-b6c4-285f4470baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.198 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] End _get_guest_xml xml=<domain type="kvm">
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <uuid>9f321f76-b34e-4ad0-b6c4-285f4470baa0</uuid>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <name>instance-00000006</name>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <memory>131072</memory>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <metadata>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <nova:name>tempest-ServerActionsTestJSON-server-864778593</nova:name>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 15:09:03</nova:creationTime>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <nova:flavor name="m1.nano">
Jan 05 15:09:03 compute-0 nova_compute[185474]:         <nova:memory>128</nova:memory>
Jan 05 15:09:03 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 15:09:03 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 15:09:03 compute-0 nova_compute[185474]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 15:09:03 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 15:09:03 compute-0 nova_compute[185474]:         <nova:user uuid="b1c84f20ffdd429d9965ed731c086635">tempest-ServerActionsTestJSON-292757575-project-member</nova:user>
Jan 05 15:09:03 compute-0 nova_compute[185474]:         <nova:project uuid="23dc0aab10ca466cb1b268ba1c456ac1">tempest-ServerActionsTestJSON-292757575</nova:project>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="e22fea2c-125b-4347-8d96-267cb6a6831b"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <nova:ports>
Jan 05 15:09:03 compute-0 nova_compute[185474]:         <nova:port uuid="5d68d02c-7204-4217-adec-1d5b6f2fc0be">
Jan 05 15:09:03 compute-0 nova_compute[185474]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:         </nova:port>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       </nova:ports>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   </metadata>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <system>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <entry name="serial">9f321f76-b34e-4ad0-b6c4-285f4470baa0</entry>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <entry name="uuid">9f321f76-b34e-4ad0-b6c4-285f4470baa0</entry>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     </system>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <os>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   </os>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <features>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <apic/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   </features>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   </clock>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   </cpu>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   <devices>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.config"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <interface type="ethernet">
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <mac address="fa:16:3e:4d:dc:0e"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <mtu size="1442"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <target dev="tap5d68d02c-72"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     </interface>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/console.log" append="off"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     </serial>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <video>
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     </video>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     </rng>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 15:09:03 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 15:09:03 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 15:09:03 compute-0 nova_compute[185474]:   </devices>
Jan 05 15:09:03 compute-0 nova_compute[185474]: </domain>
Jan 05 15:09:03 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
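Note: the <domain> XML dumped above is what the driver hands to libvirt next to define and boot instance-00000006 (q35 machine type, host-model CPU, a qcow2 root disk, a SATA config-drive CD-ROM, an OVS tap interface with MTU 1442, and a virtio RNG backed by /dev/urandom). A minimal libvirt-python sketch of that step; nova goes through its own nova.virt.libvirt.host/guest wrappers, so this is only an illustration:

    import libvirt

    # Assumes the XML above was saved to domain.xml on the compute host.
    conn = libvirt.open('qemu:///system')
    with open('domain.xml') as f:
        xml = f.read()

    dom = conn.defineXML(xml)      # persist the domain definition
    dom.createWithFlags(0)         # start the guest (instance-00000006)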
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.198 185478 DEBUG nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Preparing to wait for external event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.199 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.199 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.199 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.200 185478 DEBUG nova.virt.libvirt.vif [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T15:08:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-864778593',display_name='tempest-ServerActionsTestJSON-server-864778593',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-864778593',id=6,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSqj77vlD6kVeek16cO/Hhu/zNaQXeoSK+F7dXcoh+Z9es9Ys2ZMWKCWVSXggTtqS4B5KUVwu17u1PvVEzOSYCL9wnO8by7z4oz/x0vi0Pzvt3LMGG6NC/ghGg3ZVB5ig==',key_name='tempest-keypair-763020533',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='23dc0aab10ca466cb1b268ba1c456ac1',ramdisk_id='',reservation_id='r-75f25068',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-292757575',owner_user_name='tempest-ServerActionsTestJSON-292757575-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:08:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b1c84f20ffdd429d9965ed731c086635',uuid=9f321f76-b34e-4ad0-b6c4-285f4470baa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", 
"qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.200 185478 DEBUG nova.network.os_vif_util [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converting VIF {"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.200 185478 DEBUG nova.network.os_vif_util [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.201 185478 DEBUG os_vif [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.201 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.201 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.202 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.206 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.207 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d68d02c-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.207 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d68d02c-72, col_values=(('external_ids', {'iface-id': '5d68d02c-7204-4217-adec-1d5b6f2fc0be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:dc:0e', 'vm-uuid': '9f321f76-b34e-4ad0-b6c4-285f4470baa0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.209 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:03 compute-0 NetworkManager[56139]: <info>  [1767625743.2106] manager: (tap5d68d02c-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.212 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.219 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.220 185478 INFO os_vif [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72')
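Note: the plug sequence above issues two OVSDB transactions: an AddBridgeCommand for br-int (a no-op, "Transaction caused no change") and then an AddPortCommand plus DbSetCommand that create tap5d68d02c-72 on br-int and tag its Interface row with the Neutron port ID, MAC and instance UUID so ovn-controller can bind the port. A hedged ovsdbapp sketch of that second transaction, with the OVSDB socket path assumed and the values copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Local OVSDB connection (socket path is an assumption; os-vif sets this up
    # internally when nova asks it to plug the VIF).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap5d68d02c-72', may_exist=True))
        txn.add(api.db_set('Interface', 'tap5d68d02c-72',
                           ('external_ids', {
                               'iface-id': '5d68d02c-7204-4217-adec-1d5b6f2fc0be',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:4d:dc:0e',
                               'vm-uuid': '9f321f76-b34e-4ad0-b6c4-285f4470baa0'})))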
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.407 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.407 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.409 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] No VIF found with MAC fa:16:3e:4d:dc:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.410 185478 INFO nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Using config drive
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.455 185478 DEBUG nova.compute.provider_tree [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.473 185478 DEBUG nova.scheduler.client.report [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
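Note: the inventory reported to placement above translates into schedulable capacity as (total - reserved) x allocation_ratio per resource class: 32 vCPUs, 7167 MB of RAM and roughly 70 GB of disk for provider 81b80649-e249-4f86-9377-abfcf7fc43dd. A quick check with the values copied from the log record:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2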
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.497 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.498 185478 DEBUG nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.553 185478 DEBUG nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.554 185478 DEBUG nova.network.neutron [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.574 185478 INFO nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.600 185478 DEBUG nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.739 185478 DEBUG nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.741 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.742 185478 INFO nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Creating image(s)
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.744 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquiring lock "/var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.744 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "/var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.746 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "/var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.771 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.863 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.864 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquiring lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.866 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.879 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.912 185478 DEBUG nova.policy [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b3646be802e34810b0e66c68a88a3e3b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c91575382ac0488994f8b0a9212854c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
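Note: the "Policy check ... failed" record above is a DEBUG-level denial, not an error: the request context carries only the reader and member roles with is_admin False, so the network:attach_external_network rule (admin-only by default, which is an assumption here) evaluates to False and the build continues without permission to attach directly to external networks. A hedged oslo.policy reproduction of that evaluation:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Assumed admin-only default; nova registers its own policy defaults.
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'is_admin:True'))

    creds = {'roles': ['reader', 'member'], 'is_admin': False,
             'project_id': 'c91575382ac0488994f8b0a9212854c9'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))   # False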
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.926 185478 INFO nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Creating config drive at /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.config
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.937 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt948574c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.964 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:03 compute-0 nova_compute[185474]: 2026-01-05 15:09:03.965 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7,backing_fmt=raw /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.013 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7,backing_fmt=raw /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.015 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.016 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.084 185478 DEBUG oslo_concurrency.processutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt948574c" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.095 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.097 185478 DEBUG nova.virt.disk.api [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Checking if we can resize image /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.097 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.154 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.155 185478 DEBUG nova.virt.disk.api [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Cannot resize image /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.155 185478 DEBUG nova.objects.instance [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lazy-loading 'migration_context' on Instance uuid e8f3f84a-a594-43d9-bab3-0c34ae22eb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:09:04 compute-0 kernel: tap5d68d02c-72: entered promiscuous mode
Jan 05 15:09:04 compute-0 NetworkManager[56139]: <info>  [1767625744.1798] manager: (tap5d68d02c-72): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.179 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.179 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Ensure instance console log exists: /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.180 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.181 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:04 compute-0 ovn_controller[97763]: 2026-01-05T15:09:04Z|00066|binding|INFO|Claiming lport 5d68d02c-7204-4217-adec-1d5b6f2fc0be for this chassis.
Jan 05 15:09:04 compute-0 ovn_controller[97763]: 2026-01-05T15:09:04Z|00067|binding|INFO|5d68d02c-7204-4217-adec-1d5b6f2fc0be: Claiming fa:16:3e:4d:dc:0e 10.100.0.13
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.181 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.182 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.192 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:dc:0e 10.100.0.13'], port_security=['fa:16:3e:4d:dc:0e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9f321f76-b34e-4ad0-b6c4-285f4470baa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7313966f-87a0-413c-b336-702cd552f4fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23dc0aab10ca466cb1b268ba1c456ac1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '347728ff-d8cb-45fb-b3a1-665f18a6be0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7084d359-9113-48e1-9593-68ec04f6720b, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=5d68d02c-7204-4217-adec-1d5b6f2fc0be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.194 107222 INFO neutron.agent.ovn.metadata.agent [-] Port 5d68d02c-7204-4217-adec-1d5b6f2fc0be in datapath 7313966f-87a0-413c-b336-702cd552f4fe bound to our chassis
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.196 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7313966f-87a0-413c-b336-702cd552f4fe
Jan 05 15:09:04 compute-0 ovn_controller[97763]: 2026-01-05T15:09:04Z|00068|binding|INFO|Setting lport 5d68d02c-7204-4217-adec-1d5b6f2fc0be ovn-installed in OVS
Jan 05 15:09:04 compute-0 ovn_controller[97763]: 2026-01-05T15:09:04Z|00069|binding|INFO|Setting lport 5d68d02c-7204-4217-adec-1d5b6f2fc0be up in Southbound
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.208 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.210 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[e13f8e1e-ba6f-4ee6-b3f3-0e371e07b7c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.211 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7313966f-81 in ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.214 239805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7313966f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.214 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f0dd42-90ef-4deb-bfac-ad25a3019e44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.214 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.216 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb11388-5630-4e0d-97f8-405831247e40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.229 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[2c572cd1-034a-400d-8a8e-b8a84bfdad9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 systemd-udevd[249956]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 15:09:04 compute-0 systemd-machined[156786]: New machine qemu-6-instance-00000006.
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.254 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c4306c-5f79-4f9d-a1dd-f9bc3a5234f5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 NetworkManager[56139]: <info>  [1767625744.2595] device (tap5d68d02c-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 15:09:04 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 05 15:09:04 compute-0 NetworkManager[56139]: <info>  [1767625744.2651] device (tap5d68d02c-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.287 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[c23a1a41-e067-4307-b025-0c0dce821a40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.296 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[8af82bd4-05b5-4e4a-8dff-51ee09f43710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 systemd-udevd[249959]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 15:09:04 compute-0 NetworkManager[56139]: <info>  [1767625744.2977] manager: (tap7313966f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.313 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.348 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5f266a-7741-454a-9ebf-3f1fe3d72e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.350 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1b0f47-c73a-4620-b551-b0b6eb712957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 NetworkManager[56139]: <info>  [1767625744.3755] device (tap7313966f-80): carrier: link connected
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.380 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5e4d38-969f-438c-a100-049ecce90783]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.407 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7fbc71-36f3-42d8-a30b-46dc804d1e0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7313966f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:df:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507131, 'reachable_time': 17760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249986, 'error': None, 'target': 'ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.431 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[126194ba-6e33-482c-bbe9-6b2bafdee327]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:df96'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507131, 'tstamp': 507131}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249987, 'error': None, 'target': 'ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.452 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf4b161-d1b2-433f-95a4-ed424bd199b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7313966f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:df:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507131, 'reachable_time': 17760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249988, 'error': None, 'target': 'ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.492 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[c7051264-42b1-428d-a599-5b1516ea38b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.576 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcd68c3-c3ec-4abe-8ddf-efc2ff50b0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.579 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7313966f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.580 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.581 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7313966f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.584 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:04 compute-0 kernel: tap7313966f-80: entered promiscuous mode
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.587 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:04 compute-0 NetworkManager[56139]: <info>  [1767625744.5885] manager: (tap7313966f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.588 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7313966f-80, col_values=(('external_ids', {'iface-id': '707d34b3-bc8b-4c2e-8e88-017cd6da92d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.590 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:04 compute-0 ovn_controller[97763]: 2026-01-05T15:09:04Z|00070|binding|INFO|Releasing lport 707d34b3-bc8b-4c2e-8e88-017cd6da92d0 from this chassis (sb_readonly=0)
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.605 107222 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7313966f-87a0-413c-b336-702cd552f4fe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7313966f-87a0-413c-b336-702cd552f4fe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.606 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.608 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[fb02bdf0-c6bb-4afe-9a52-b7ee2e381691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.610 107222 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: global
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     log         /dev/log local0 debug
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     log-tag     haproxy-metadata-proxy-7313966f-87a0-413c-b336-702cd552f4fe
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     user        root
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     group       root
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     maxconn     1024
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     pidfile     /var/lib/neutron/external/pids/7313966f-87a0-413c-b336-702cd552f4fe.pid.haproxy
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     daemon
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: defaults
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     log global
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     mode http
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     option httplog
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     option dontlognull
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     option http-server-close
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     option forwardfor
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     retries                 3
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     timeout http-request    30s
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     timeout connect         30s
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     timeout client          32s
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     timeout server          32s
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     timeout http-keep-alive 30s
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: listen listener
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     bind 169.254.169.254:80
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:     http-request add-header X-OVN-Network-ID 7313966f-87a0-413c-b336-702cd552f4fe
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 15:09:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:04.611 107222 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe', 'env', 'PROCESS_TAG=haproxy-7313966f-87a0-413c-b336-702cd552f4fe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7313966f-87a0-413c-b336-702cd552f4fe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.674 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625744.6738353, 9f321f76-b34e-4ad0-b6c4-285f4470baa0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.675 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] VM Started (Lifecycle Event)
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.704 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.713 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625744.6739671, 9f321f76-b34e-4ad0-b6c4-285f4470baa0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.714 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] VM Paused (Lifecycle Event)
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.737 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.746 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.765 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 15:09:04 compute-0 nova_compute[185474]: 2026-01-05 15:09:04.993 185478 DEBUG nova.network.neutron [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Successfully updated port: fae4cff5-7c84-4731-9afc-a8de3de83750 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 15:09:05 compute-0 nova_compute[185474]: 2026-01-05 15:09:05.014 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "refresh_cache-b609148c-bafc-4084-9491-68114aa80c67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:05 compute-0 nova_compute[185474]: 2026-01-05 15:09:05.015 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquired lock "refresh_cache-b609148c-bafc-4084-9491-68114aa80c67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:05 compute-0 nova_compute[185474]: 2026-01-05 15:09:05.015 185478 DEBUG nova.network.neutron [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 15:09:05 compute-0 podman[250026]: 2026-01-05 15:09:05.145565132 +0000 UTC m=+0.082854650 container create 63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:09:05 compute-0 podman[250026]: 2026-01-05 15:09:05.101348098 +0000 UTC m=+0.038637716 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 15:09:05 compute-0 systemd[1]: Started libpod-conmon-63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f.scope.
Jan 05 15:09:05 compute-0 systemd[1]: Started libcrun container.
Jan 05 15:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ad3981894a8c62cfb7f01da37b4fd97ed67942e64740be9c14e9a239b4e893/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 15:09:05 compute-0 podman[250026]: 2026-01-05 15:09:05.277355131 +0000 UTC m=+0.214644729 container init 63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 15:09:05 compute-0 podman[250026]: 2026-01-05 15:09:05.289928978 +0000 UTC m=+0.227218536 container start 63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 05 15:09:05 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[250042]: [NOTICE]   (250046) : New worker (250048) forked
Jan 05 15:09:05 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[250042]: [NOTICE]   (250046) : Loading success.
Jan 05 15:09:05 compute-0 nova_compute[185474]: 2026-01-05 15:09:05.646 185478 DEBUG nova.network.neutron [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.220 185478 DEBUG nova.network.neutron [req-1e67e6b4-510d-4f8e-86bc-bd5647b592d7 req-d9bace2f-f987-43fe-8817-f828d9c84347 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Updated VIF entry in instance network info cache for port 5d68d02c-7204-4217-adec-1d5b6f2fc0be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.221 185478 DEBUG nova.network.neutron [req-1e67e6b4-510d-4f8e-86bc-bd5647b592d7 req-d9bace2f-f987-43fe-8817-f828d9c84347 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Updating instance_info_cache with network_info: [{"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.252 185478 DEBUG oslo_concurrency.lockutils [req-1e67e6b4-510d-4f8e-86bc-bd5647b592d7 req-d9bace2f-f987-43fe-8817-f828d9c84347 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.290 185478 DEBUG nova.network.neutron [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Successfully created port: b2305559-518c-443d-8e89-66e8c7533280 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.875 185478 DEBUG nova.network.neutron [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Updating instance_info_cache with network_info: [{"id": "fae4cff5-7c84-4731-9afc-a8de3de83750", "address": "fa:16:3e:94:d5:32", "network": {"id": "8594a48f-0d80-4a92-87ee-40a6961e3975", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-277196153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "678014b38c6f4f25a192ebc53f68039f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae4cff5-7c", "ovs_interfaceid": "fae4cff5-7c84-4731-9afc-a8de3de83750", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.904 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Releasing lock "refresh_cache-b609148c-bafc-4084-9491-68114aa80c67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.905 185478 DEBUG nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Instance network_info: |[{"id": "fae4cff5-7c84-4731-9afc-a8de3de83750", "address": "fa:16:3e:94:d5:32", "network": {"id": "8594a48f-0d80-4a92-87ee-40a6961e3975", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-277196153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "678014b38c6f4f25a192ebc53f68039f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae4cff5-7c", "ovs_interfaceid": "fae4cff5-7c84-4731-9afc-a8de3de83750", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.909 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Start _get_guest_xml network_info=[{"id": "fae4cff5-7c84-4731-9afc-a8de3de83750", "address": "fa:16:3e:94:d5:32", "network": {"id": "8594a48f-0d80-4a92-87ee-40a6961e3975", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-277196153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "678014b38c6f4f25a192ebc53f68039f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae4cff5-7c", "ovs_interfaceid": "fae4cff5-7c84-4731-9afc-a8de3de83750", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T15:08:04Z,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T15:08:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.922 185478 WARNING nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:09:06 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.934 185478 DEBUG nova.virt.libvirt.host [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.936 185478 DEBUG nova.virt.libvirt.host [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.944 185478 DEBUG nova.virt.libvirt.host [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.947 185478 DEBUG nova.virt.libvirt.host [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.948 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.949 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T15:08:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3a2fb381-0342-40f9-8eb5-089f8c9475fd',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T15:08:04Z,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T15:08:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.950 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.951 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.952 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.953 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.954 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.955 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.956 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.956 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.957 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.957 185478 DEBUG nova.virt.hardware [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
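
The block above is Nova choosing a guest CPU topology for the 1-vCPU m1.nano flavor: no limits or preferences come from the flavor extra specs or image properties (everything is 0:0:0 against a 65536 ceiling), so the only factorisation of 1 vCPU is sockets=1, cores=1, threads=1. A simplified sketch of that enumeration step follows; it is not Nova's actual _get_possible_cpu_topologies, just the same idea under the assumption that no preferred values are set:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Enumerate (sockets, cores, threads) triples whose product equals vcpus.

        Simplified illustration of the search logged above; the real Nova code
        also honours preferred values from flavor extra specs and image props.
        """
        topologies = []
        for sockets, cores, threads in itertools.product(
                range(1, min(vcpus, max_sockets) + 1),
                range(1, min(vcpus, max_cores) + 1),
                range(1, min(vcpus, max_threads) + 1)):
            if sockets * cores * threads == vcpus:
                topologies.append((sockets, cores, threads))
        return topologies

    print(possible_topologies(1))   # [(1, 1, 1)] -- matches "Got 1 possible topologies"
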
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.962 185478 DEBUG nova.virt.libvirt.vif [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T15:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1218735485',display_name='tempest-ServerAddressesTestJSON-server-1218735485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1218735485',id=7,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='678014b38c6f4f25a192ebc53f68039f',ramdisk_id='',reservation_id='r-m13rgl55',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1009038128',owner_user_name='tempest-ServerAddressesTestJSON-1009038128-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:08:57Z,user_data=None,user_id='dbda6f7f58004adf93ccce9df032cbbb',uuid=b609148c-bafc-4084-9491-68114aa80c67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fae4cff5-7c84-4731-9afc-a8de3de83750", "address": "fa:16:3e:94:d5:32", "network": {"id": "8594a48f-0d80-4a92-87ee-40a6961e3975", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-277196153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "678014b38c6f4f25a192ebc53f68039f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae4cff5-7c", "ovs_interfaceid": "fae4cff5-7c84-4731-9afc-a8de3de83750", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.963 185478 DEBUG nova.network.os_vif_util [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Converting VIF {"id": "fae4cff5-7c84-4731-9afc-a8de3de83750", "address": "fa:16:3e:94:d5:32", "network": {"id": "8594a48f-0d80-4a92-87ee-40a6961e3975", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-277196153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "678014b38c6f4f25a192ebc53f68039f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae4cff5-7c", "ovs_interfaceid": "fae4cff5-7c84-4731-9afc-a8de3de83750", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.964 185478 DEBUG nova.network.os_vif_util [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:d5:32,bridge_name='br-int',has_traffic_filtering=True,id=fae4cff5-7c84-4731-9afc-a8de3de83750,network=Network(8594a48f-0d80-4a92-87ee-40a6961e3975),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae4cff5-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.965 185478 DEBUG nova.objects.instance [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lazy-loading 'pci_devices' on Instance uuid b609148c-bafc-4084-9491-68114aa80c67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:09:06 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.988 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] End _get_guest_xml xml=<domain type="kvm">
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <uuid>b609148c-bafc-4084-9491-68114aa80c67</uuid>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <name>instance-00000007</name>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <memory>131072</memory>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <metadata>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <nova:name>tempest-ServerAddressesTestJSON-server-1218735485</nova:name>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 15:09:06</nova:creationTime>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <nova:flavor name="m1.nano">
Jan 05 15:09:06 compute-0 nova_compute[185474]:         <nova:memory>128</nova:memory>
Jan 05 15:09:06 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 15:09:06 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 15:09:06 compute-0 nova_compute[185474]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 15:09:06 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 15:09:06 compute-0 nova_compute[185474]:         <nova:user uuid="dbda6f7f58004adf93ccce9df032cbbb">tempest-ServerAddressesTestJSON-1009038128-project-member</nova:user>
Jan 05 15:09:06 compute-0 nova_compute[185474]:         <nova:project uuid="678014b38c6f4f25a192ebc53f68039f">tempest-ServerAddressesTestJSON-1009038128</nova:project>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="e22fea2c-125b-4347-8d96-267cb6a6831b"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <nova:ports>
Jan 05 15:09:06 compute-0 nova_compute[185474]:         <nova:port uuid="fae4cff5-7c84-4731-9afc-a8de3de83750">
Jan 05 15:09:06 compute-0 nova_compute[185474]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:         </nova:port>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       </nova:ports>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   </metadata>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <system>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <entry name="serial">b609148c-bafc-4084-9491-68114aa80c67</entry>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <entry name="uuid">b609148c-bafc-4084-9491-68114aa80c67</entry>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     </system>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <os>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   </os>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <features>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <apic/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   </features>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   </clock>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   </cpu>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   <devices>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk.config"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <interface type="ethernet">
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <mac address="fa:16:3e:94:d5:32"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <mtu size="1442"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <target dev="tapfae4cff5-7c"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     </interface>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/console.log" append="off"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     </serial>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <video>
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     </video>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     </rng>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 15:09:06 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 15:09:06 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 15:09:06 compute-0 nova_compute[185474]:   </devices>
Jan 05 15:09:06 compute-0 nova_compute[185474]: </domain>
Jan 05 15:09:06 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
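
The XML dump above is the domain definition nova-compute hands to libvirt for instance-00000007. Once the guest is defined it can be read back and inspected with the libvirt Python bindings; a minimal sketch, assuming read access to the local libvirt socket and that the domain exists under the name shown above:

    import xml.etree.ElementTree as ET
    import libvirt

    # A read-only connection is enough for inspection.
    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000007')
    xml_desc = dom.XMLDesc(0)

    root = ET.fromstring(xml_desc)
    print(root.findtext('uuid'))      # b609148c-bafc-4084-9491-68114aa80c67
    print(root.findtext('memory'))    # 131072 (KiB, i.e. the 128 MiB flavor)
    print(root.findtext('vcpu'))      # 1
    conn.close()
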
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.989 185478 DEBUG nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Preparing to wait for external event network-vif-plugged-fae4cff5-7c84-4731-9afc-a8de3de83750 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.989 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "b609148c-bafc-4084-9491-68114aa80c67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.990 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.990 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.991 185478 DEBUG nova.virt.libvirt.vif [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T15:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1218735485',display_name='tempest-ServerAddressesTestJSON-server-1218735485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1218735485',id=7,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='678014b38c6f4f25a192ebc53f68039f',ramdisk_id='',reservation_id='r-m13rgl55',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1009038128',owner_user_name='tempest-ServerAddressesTestJSON-1009038128-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:08:57Z,user_data=None,user_id='dbda6f7f58004adf93ccce9df032cbbb',uuid=b609148c-bafc-4084-9491-68114aa80c67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fae4cff5-7c84-4731-9afc-a8de3de83750", "address": "fa:16:3e:94:d5:32", "network": {"id": "8594a48f-0d80-4a92-87ee-40a6961e3975", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-277196153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "678014b38c6f4f25a192ebc53f68039f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae4cff5-7c", "ovs_interfaceid": "fae4cff5-7c84-4731-9afc-a8de3de83750", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.991 185478 DEBUG nova.network.os_vif_util [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Converting VIF {"id": "fae4cff5-7c84-4731-9afc-a8de3de83750", "address": "fa:16:3e:94:d5:32", "network": {"id": "8594a48f-0d80-4a92-87ee-40a6961e3975", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-277196153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "678014b38c6f4f25a192ebc53f68039f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae4cff5-7c", "ovs_interfaceid": "fae4cff5-7c84-4731-9afc-a8de3de83750", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.992 185478 DEBUG nova.network.os_vif_util [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:d5:32,bridge_name='br-int',has_traffic_filtering=True,id=fae4cff5-7c84-4731-9afc-a8de3de83750,network=Network(8594a48f-0d80-4a92-87ee-40a6961e3975),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae4cff5-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.993 185478 DEBUG os_vif [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:d5:32,bridge_name='br-int',has_traffic_filtering=True,id=fae4cff5-7c84-4731-9afc-a8de3de83750,network=Network(8594a48f-0d80-4a92-87ee-40a6961e3975),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae4cff5-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.994 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.994 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.995 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.998 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.998 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfae4cff5-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:06 compute-0 nova_compute[185474]: 2026-01-05 15:09:06.998 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfae4cff5-7c, col_values=(('external_ids', {'iface-id': 'fae4cff5-7c84-4731-9afc-a8de3de83750', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:d5:32', 'vm-uuid': 'b609148c-bafc-4084-9491-68114aa80c67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.001 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.002 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 15:09:07 compute-0 NetworkManager[56139]: <info>  [1767625747.0032] manager: (tapfae4cff5-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.015 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.019 185478 INFO os_vif [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:d5:32,bridge_name='br-int',has_traffic_filtering=True,id=fae4cff5-7c84-4731-9afc-a8de3de83750,network=Network(8594a48f-0d80-4a92-87ee-40a6961e3975),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae4cff5-7c')
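
The three OVSDB commands logged above (AddBridgeCommand, AddPortCommand, DbSetCommand) are os-vif's OVS plugin wiring the tap device into br-int and tagging the Interface row with the Neutron port ID so ovn-controller can bind it. A rough standalone equivalent through ovsdbapp is sketched below; the connection setup assumes the default local OVSDB socket path and may differ between ovsdbapp releases:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # The same three operations the log shows for tapfae4cff5-7c.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapfae4cff5-7c', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapfae4cff5-7c',
            ('external_ids', {'iface-id': 'fae4cff5-7c84-4731-9afc-a8de3de83750',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:94:d5:32',
                              'vm-uuid': 'b609148c-bafc-4084-9491-68114aa80c67'})))
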
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.075 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.075 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.075 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] No VIF found with MAC fa:16:3e:94:d5:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.076 185478 INFO nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Using config drive
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.677 185478 DEBUG nova.network.neutron [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Successfully updated port: b2305559-518c-443d-8e89-66e8c7533280 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.686 185478 INFO nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Creating config drive at /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk.config
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.692 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvlojkv0z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.710 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquiring lock "refresh_cache-e8f3f84a-a594-43d9-bab3-0c34ae22eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.710 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquired lock "refresh_cache-e8f3f84a-a594-43d9-bab3-0c34ae22eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.710 185478 DEBUG nova.network.neutron [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.745 185478 DEBUG nova.compute.manager [req-35f9b2cd-6c2e-4e64-8b91-cb27e9320302 req-fd6a6d3b-f1ab-456b-8bfc-76af1ab3946a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Received event network-changed-fae4cff5-7c84-4731-9afc-a8de3de83750 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.745 185478 DEBUG nova.compute.manager [req-35f9b2cd-6c2e-4e64-8b91-cb27e9320302 req-fd6a6d3b-f1ab-456b-8bfc-76af1ab3946a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Refreshing instance network info cache due to event network-changed-fae4cff5-7c84-4731-9afc-a8de3de83750. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.745 185478 DEBUG oslo_concurrency.lockutils [req-35f9b2cd-6c2e-4e64-8b91-cb27e9320302 req-fd6a6d3b-f1ab-456b-8bfc-76af1ab3946a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-b609148c-bafc-4084-9491-68114aa80c67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.746 185478 DEBUG oslo_concurrency.lockutils [req-35f9b2cd-6c2e-4e64-8b91-cb27e9320302 req-fd6a6d3b-f1ab-456b-8bfc-76af1ab3946a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-b609148c-bafc-4084-9491-68114aa80c67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.746 185478 DEBUG nova.network.neutron [req-35f9b2cd-6c2e-4e64-8b91-cb27e9320302 req-fd6a6d3b-f1ab-456b-8bfc-76af1ab3946a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Refreshing network info cache for port fae4cff5-7c84-4731-9afc-a8de3de83750 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.817 185478 DEBUG oslo_concurrency.processutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvlojkv0z" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
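
The config drive is built by shelling out to mkisofs; oslo logs the argv joined with spaces, so the multi-word -publisher value looks unquoted above but is passed as a single argument. A sketch of the same invocation, with the paths and version string taken from the log (/tmp/tmpvlojkv0z being the temporary directory Nova staged the metadata files into):

    from oslo_concurrency import processutils

    iso_path = ('/var/lib/nova/instances/'
                'b609148c-bafc-4084-9491-68114aa80c67/disk.config')
    staging_dir = '/tmp/tmpvlojkv0z'
    publisher = 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9'

    # Each element is one argv entry, so the spaces in `publisher` need no quoting.
    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o', iso_path,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', publisher,
        '-quiet', '-J', '-r', '-V', 'config-2',
        staging_dir)
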
Jan 05 15:09:07 compute-0 kernel: tapfae4cff5-7c: entered promiscuous mode
Jan 05 15:09:07 compute-0 NetworkManager[56139]: <info>  [1767625747.9025] manager: (tapfae4cff5-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.905 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:07 compute-0 ovn_controller[97763]: 2026-01-05T15:09:07Z|00071|binding|INFO|Claiming lport fae4cff5-7c84-4731-9afc-a8de3de83750 for this chassis.
Jan 05 15:09:07 compute-0 ovn_controller[97763]: 2026-01-05T15:09:07Z|00072|binding|INFO|fae4cff5-7c84-4731-9afc-a8de3de83750: Claiming fa:16:3e:94:d5:32 10.100.0.10
Jan 05 15:09:07 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:07.918 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:d5:32 10.100.0.10'], port_security=['fa:16:3e:94:d5:32 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b609148c-bafc-4084-9491-68114aa80c67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8594a48f-0d80-4a92-87ee-40a6961e3975', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '678014b38c6f4f25a192ebc53f68039f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd15c5afa-04db-4551-8b0f-481ab4def61b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=739861a9-b1d8-47b5-af70-6bb1d7a202d4, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=fae4cff5-7c84-4731-9afc-a8de3de83750) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:09:07 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:07.920 107222 INFO neutron.agent.ovn.metadata.agent [-] Port fae4cff5-7c84-4731-9afc-a8de3de83750 in datapath 8594a48f-0d80-4a92-87ee-40a6961e3975 bound to our chassis
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.924 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:07 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:07.922 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8594a48f-0d80-4a92-87ee-40a6961e3975
Jan 05 15:09:07 compute-0 ovn_controller[97763]: 2026-01-05T15:09:07Z|00073|binding|INFO|Setting lport fae4cff5-7c84-4731-9afc-a8de3de83750 ovn-installed in OVS
Jan 05 15:09:07 compute-0 ovn_controller[97763]: 2026-01-05T15:09:07Z|00074|binding|INFO|Setting lport fae4cff5-7c84-4731-9afc-a8de3de83750 up in Southbound
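
With the logical port claimed and set up in the Southbound database, the OVN Neutron driver can mark the port active, which is what eventually satisfies the network-vif-plugged event Nova started waiting for at 15:09:06.989. The binding can be double-checked from the compute node; a sketch that shells out to ovn-sbctl, assuming the CLI is installed and pointed at the correct Southbound connection:

    from oslo_concurrency import processutils

    LPORT = 'fae4cff5-7c84-4731-9afc-a8de3de83750'

    # Show the Port_Binding row for the logical port; `chassis` should reference
    # compute-0 and `up` should be true once the claim above has completed.
    out, _err = processutils.execute(
        'ovn-sbctl', 'find', 'Port_Binding', 'logical_port=%s' % LPORT)
    print(out)
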
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.929 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:07 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:07.940 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[6a42c904-a448-49b7-a8ec-340da8f0059c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:07 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:07.941 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8594a48f-01 in ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 15:09:07 compute-0 systemd-udevd[250096]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 15:09:07 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:07.944 239805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8594a48f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 15:09:07 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:07.944 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8c2eb9-af86-4c6c-a156-0c4d0f2b898f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:07 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:07.946 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[21160b7f-ad1a-4148-ab6d-142ec6a27c45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:07 compute-0 systemd-machined[156786]: New machine qemu-7-instance-00000007.
Jan 05 15:09:07 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:07.964 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[d08bf6da-b660-44bb-a968-b82da6d4514d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:07 compute-0 NetworkManager[56139]: <info>  [1767625747.9681] device (tapfae4cff5-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 15:09:07 compute-0 NetworkManager[56139]: <info>  [1767625747.9690] device (tapfae4cff5-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 15:09:07 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 05 15:09:07 compute-0 nova_compute[185474]: 2026-01-05 15:09:07.974 185478 DEBUG nova.network.neutron [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 15:09:07 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:07.983 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[4609c496-19b2-4164-b4dc-93efcebcdb51]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.022 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[6a64ce1b-1358-4bdd-b979-2879d976b500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 NetworkManager[56139]: <info>  [1767625748.0316] manager: (tap8594a48f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.030 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[ee833dc8-823c-4cbe-8e1f-d3eb2fdc1a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.078 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[82382568-82c6-4515-a6ba-6293dfd65da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.082 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[08bbf351-c1c9-44a0-8d96-0ade2027f417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 NetworkManager[56139]: <info>  [1767625748.1250] device (tap8594a48f-00): carrier: link connected
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.134 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[a23f825b-94f5-4513-b98b-5e3185692998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.167 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[a569836c-b3e4-4bc1-a61e-75dc1da2d23d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8594a48f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:94:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507506, 'reachable_time': 41656, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250129, 'error': None, 'target': 'ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.193 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[68168d98-8ebb-4d9a-90bb-b6ddf5f0aa04]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:9451'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507506, 'tstamp': 507506}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250130, 'error': None, 'target': 'ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.232 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[3930c227-5fb4-4f8f-9525-9d8ef6bd8c50]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8594a48f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:94:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507506, 'reachable_time': 41656, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250131, 'error': None, 'target': 'ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.285 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[581a2044-a2a4-4830-9c83-f6e5a8a67048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.376 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f4cac33b-c3c8-4748-8be2-b91a15d927e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.378 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8594a48f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.378 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.379 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8594a48f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.382 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:08 compute-0 kernel: tap8594a48f-00: entered promiscuous mode
Jan 05 15:09:08 compute-0 NetworkManager[56139]: <info>  [1767625748.3854] manager: (tap8594a48f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.391 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8594a48f-00, col_values=(('external_ids', {'iface-id': 'dc7328d3-d992-4424-9638-c56a9b7d138d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.393 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:08 compute-0 ovn_controller[97763]: 2026-01-05T15:09:08Z|00075|binding|INFO|Releasing lport dc7328d3-d992-4424-9638-c56a9b7d138d from this chassis (sb_readonly=0)
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.395 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.398 107222 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8594a48f-0d80-4a92-87ee-40a6961e3975.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8594a48f-0d80-4a92-87ee-40a6961e3975.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.399 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[6ede19f9-2480-45d6-a1fa-d65a421c8be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.401 107222 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: global
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     log         /dev/log local0 debug
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     log-tag     haproxy-metadata-proxy-8594a48f-0d80-4a92-87ee-40a6961e3975
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     user        root
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     group       root
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     maxconn     1024
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     pidfile     /var/lib/neutron/external/pids/8594a48f-0d80-4a92-87ee-40a6961e3975.pid.haproxy
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     daemon
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: defaults
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     log global
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     mode http
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     option httplog
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     option dontlognull
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     option http-server-close
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     option forwardfor
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     retries                 3
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     timeout http-request    30s
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     timeout connect         30s
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     timeout client          32s
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     timeout server          32s
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     timeout http-keep-alive 30s
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: listen listener
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     bind 169.254.169.254:80
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:     http-request add-header X-OVN-Network-ID 8594a48f-0d80-4a92-87ee-40a6961e3975
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 15:09:08 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:08.401 107222 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975', 'env', 'PROCESS_TAG=haproxy-8594a48f-0d80-4a92-87ee-40a6961e3975', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8594a48f-0d80-4a92-87ee-40a6961e3975.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.411 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.623 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625748.6230092, b609148c-bafc-4084-9491-68114aa80c67 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.624 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] VM Started (Lifecycle Event)
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.644 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.653 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625748.6231086, b609148c-bafc-4084-9491-68114aa80c67 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.654 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] VM Paused (Lifecycle Event)
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.677 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.683 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.703 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquiring lock "00943943-b19d-4862-8829-45a5cc14e988" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.704 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "00943943-b19d-4862-8829-45a5cc14e988" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.705 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.731 185478 DEBUG nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.832 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.834 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.848 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 15:09:08 compute-0 nova_compute[185474]: 2026-01-05 15:09:08.849 185478 INFO nova.compute.claims [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Claim successful on node compute-0.ctlplane.example.com
Jan 05 15:09:08 compute-0 podman[250168]: 2026-01-05 15:09:08.900860667 +0000 UTC m=+0.075885494 container create 11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 05 15:09:08 compute-0 systemd[1]: Started libpod-conmon-11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7.scope.
Jan 05 15:09:08 compute-0 podman[250168]: 2026-01-05 15:09:08.864971955 +0000 UTC m=+0.039996832 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 15:09:08 compute-0 systemd[1]: Started libcrun container.
Jan 05 15:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2ec6b816fa6cc816b6ca8d3c0e2f37354cc89869a18ef3eca4471de9dc279f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 15:09:09 compute-0 podman[250168]: 2026-01-05 15:09:09.013122972 +0000 UTC m=+0.188147819 container init 11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:09:09 compute-0 podman[250168]: 2026-01-05 15:09:09.024314632 +0000 UTC m=+0.199339459 container start 11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.032 185478 DEBUG nova.compute.provider_tree [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:09:09 compute-0 neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975[250184]: [NOTICE]   (250200) : New worker (250207) forked
Jan 05 15:09:09 compute-0 neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975[250184]: [NOTICE]   (250200) : Loading success.
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.049 185478 DEBUG nova.scheduler.client.report [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:09:09 compute-0 podman[250183]: 2026-01-05 15:09:09.064857717 +0000 UTC m=+0.100493072 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.071 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.073 185478 DEBUG nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.116 185478 DEBUG nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.117 185478 DEBUG nova.network.neutron [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.137 185478 INFO nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.151 185478 DEBUG nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.187 185478 DEBUG nova.network.neutron [req-35f9b2cd-6c2e-4e64-8b91-cb27e9320302 req-fd6a6d3b-f1ab-456b-8bfc-76af1ab3946a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Updated VIF entry in instance network info cache for port fae4cff5-7c84-4731-9afc-a8de3de83750. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.188 185478 DEBUG nova.network.neutron [req-35f9b2cd-6c2e-4e64-8b91-cb27e9320302 req-fd6a6d3b-f1ab-456b-8bfc-76af1ab3946a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Updating instance_info_cache with network_info: [{"id": "fae4cff5-7c84-4731-9afc-a8de3de83750", "address": "fa:16:3e:94:d5:32", "network": {"id": "8594a48f-0d80-4a92-87ee-40a6961e3975", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-277196153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "678014b38c6f4f25a192ebc53f68039f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae4cff5-7c", "ovs_interfaceid": "fae4cff5-7c84-4731-9afc-a8de3de83750", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.201 185478 DEBUG oslo_concurrency.lockutils [req-35f9b2cd-6c2e-4e64-8b91-cb27e9320302 req-fd6a6d3b-f1ab-456b-8bfc-76af1ab3946a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-b609148c-bafc-4084-9491-68114aa80c67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.238 185478 DEBUG nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.240 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.240 185478 INFO nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Creating image(s)
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.241 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquiring lock "/var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.242 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "/var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.243 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "/var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.260 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.318 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.324 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.325 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquiring lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.325 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.340 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.358 185478 DEBUG nova.policy [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f2d114b57ba04fe69b1c1c673fb3da52', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '47a5a3a457584254b36f5f2118cf6568', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.400 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.401 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7,backing_fmt=raw /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.446 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7,backing_fmt=raw /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.447 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.448 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.512 185478 DEBUG nova.compute.manager [req-f454f1ae-3cfd-45b3-b9d0-e1f168ac3fcd req-fa9d028a-f8f5-45a5-86d6-c41ae25f2407 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Received event network-vif-plugged-fae4cff5-7c84-4731-9afc-a8de3de83750 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.512 185478 DEBUG oslo_concurrency.lockutils [req-f454f1ae-3cfd-45b3-b9d0-e1f168ac3fcd req-fa9d028a-f8f5-45a5-86d6-c41ae25f2407 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "b609148c-bafc-4084-9491-68114aa80c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.513 185478 DEBUG oslo_concurrency.lockutils [req-f454f1ae-3cfd-45b3-b9d0-e1f168ac3fcd req-fa9d028a-f8f5-45a5-86d6-c41ae25f2407 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.513 185478 DEBUG oslo_concurrency.lockutils [req-f454f1ae-3cfd-45b3-b9d0-e1f168ac3fcd req-fa9d028a-f8f5-45a5-86d6-c41ae25f2407 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.514 185478 DEBUG nova.compute.manager [req-f454f1ae-3cfd-45b3-b9d0-e1f168ac3fcd req-fa9d028a-f8f5-45a5-86d6-c41ae25f2407 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Processing event network-vif-plugged-fae4cff5-7c84-4731-9afc-a8de3de83750 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.515 185478 DEBUG nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.531 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625749.5309024, b609148c-bafc-4084-9491-68114aa80c67 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.532 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] VM Resumed (Lifecycle Event)
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.534 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.538 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.539 185478 DEBUG nova.virt.disk.api [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Checking if we can resize image /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.539 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.556 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.560 185478 INFO nova.virt.libvirt.driver [-] [instance: b609148c-bafc-4084-9491-68114aa80c67] Instance spawned successfully.
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.561 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.566 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.590 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.591 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.592 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.593 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.593 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.594 185478 DEBUG nova.virt.libvirt.driver [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.597 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.608 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.609 185478 DEBUG nova.virt.disk.api [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Cannot resize image /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.609 185478 DEBUG nova.objects.instance [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lazy-loading 'migration_context' on Instance uuid 00943943-b19d-4862-8829-45a5cc14e988 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.633 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.634 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Ensure instance console log exists: /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.635 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.635 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.636 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.653 185478 INFO nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Took 11.94 seconds to spawn the instance on the hypervisor.
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.653 185478 DEBUG nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.719 185478 INFO nova.compute.manager [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Took 12.87 seconds to build instance.
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.748 185478 DEBUG oslo_concurrency.lockutils [None req-ac8ae59e-7229-43a4-92c0-e906e52f385b dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.802 185478 DEBUG nova.network.neutron [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Updating instance_info_cache with network_info: [{"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.825 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Releasing lock "refresh_cache-e8f3f84a-a594-43d9-bab3-0c34ae22eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.826 185478 DEBUG nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Instance network_info: |[{"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.829 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Start _get_guest_xml network_info=[{"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T15:08:04Z,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T15:08:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.839 185478 WARNING nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.850 185478 DEBUG nova.virt.libvirt.host [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.852 185478 DEBUG nova.virt.libvirt.host [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.858 185478 DEBUG nova.virt.libvirt.host [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.860 185478 DEBUG nova.virt.libvirt.host [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.861 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.861 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T15:08:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3a2fb381-0342-40f9-8eb5-089f8c9475fd',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T15:08:04Z,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T15:08:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.863 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.864 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.864 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.865 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.866 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.866 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.867 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.868 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.869 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.869 185478 DEBUG nova.virt.hardware [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.874 185478 DEBUG nova.virt.libvirt.vif [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T15:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-93055923',display_name='tempest-ServersTestJSON-server-93055923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-93055923',id=8,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOKjKHxMzV9IKMXtRWphl2b40AbYPZvQPMxhHq7kTAe84zAbR8ZtG9PfDS/YYxPSKki8zjxJTK+0AAWxpbY+SQ9Ib05RnnMnYmgv8LIGU89QZlVYEuk8pJyOC9BJ2NWKyA==',key_name='tempest-keypair-664545898',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c91575382ac0488994f8b0a9212854c9',ramdisk_id='',reservation_id='r-li46x666',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-654130884',owner_user_name='tempest-ServersTestJSON-654130884-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:09:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b3646be802e34810b0e66c68a88a3e3b',uuid=e8f3f84a-a594-43d9-bab3-0c34ae22eb35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.876 185478 DEBUG nova.network.os_vif_util [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Converting VIF {"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.877 185478 DEBUG nova.network.os_vif_util [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:b3:81,bridge_name='br-int',has_traffic_filtering=True,id=b2305559-518c-443d-8e89-66e8c7533280,network=Network(789d59ac-11f1-48c0-a5bc-712b3342f5f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2305559-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.879 185478 DEBUG nova.objects.instance [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8f3f84a-a594-43d9-bab3-0c34ae22eb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.894 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] End _get_guest_xml xml=<domain type="kvm">
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <uuid>e8f3f84a-a594-43d9-bab3-0c34ae22eb35</uuid>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <name>instance-00000008</name>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <memory>131072</memory>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <metadata>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <nova:name>tempest-ServersTestJSON-server-93055923</nova:name>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 15:09:09</nova:creationTime>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <nova:flavor name="m1.nano">
Jan 05 15:09:09 compute-0 nova_compute[185474]:         <nova:memory>128</nova:memory>
Jan 05 15:09:09 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 15:09:09 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 15:09:09 compute-0 nova_compute[185474]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 15:09:09 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 15:09:09 compute-0 nova_compute[185474]:         <nova:user uuid="b3646be802e34810b0e66c68a88a3e3b">tempest-ServersTestJSON-654130884-project-member</nova:user>
Jan 05 15:09:09 compute-0 nova_compute[185474]:         <nova:project uuid="c91575382ac0488994f8b0a9212854c9">tempest-ServersTestJSON-654130884</nova:project>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="e22fea2c-125b-4347-8d96-267cb6a6831b"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <nova:ports>
Jan 05 15:09:09 compute-0 nova_compute[185474]:         <nova:port uuid="b2305559-518c-443d-8e89-66e8c7533280">
Jan 05 15:09:09 compute-0 nova_compute[185474]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:         </nova:port>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       </nova:ports>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   </metadata>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <system>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <entry name="serial">e8f3f84a-a594-43d9-bab3-0c34ae22eb35</entry>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <entry name="uuid">e8f3f84a-a594-43d9-bab3-0c34ae22eb35</entry>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     </system>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <os>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   </os>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <features>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <apic/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   </features>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   </clock>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   </cpu>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   <devices>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk.config"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <interface type="ethernet">
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <mac address="fa:16:3e:6a:b3:81"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <mtu size="1442"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <target dev="tapb2305559-51"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     </interface>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/console.log" append="off"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     </serial>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <video>
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     </video>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     </rng>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 15:09:09 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 15:09:09 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 15:09:09 compute-0 nova_compute[185474]:   </devices>
Jan 05 15:09:09 compute-0 nova_compute[185474]: </domain>
Jan 05 15:09:09 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.896 185478 DEBUG nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Preparing to wait for external event network-vif-plugged-b2305559-518c-443d-8e89-66e8c7533280 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.897 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquiring lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.897 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.897 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.898 185478 DEBUG nova.virt.libvirt.vif [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T15:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-93055923',display_name='tempest-ServersTestJSON-server-93055923',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-93055923',id=8,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOKjKHxMzV9IKMXtRWphl2b40AbYPZvQPMxhHq7kTAe84zAbR8ZtG9PfDS/YYxPSKki8zjxJTK+0AAWxpbY+SQ9Ib05RnnMnYmgv8LIGU89QZlVYEuk8pJyOC9BJ2NWKyA==',key_name='tempest-keypair-664545898',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c91575382ac0488994f8b0a9212854c9',ramdisk_id='',reservation_id='r-li46x666',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-654130884',owner_user_name='tempest-ServersTestJSON-654130884-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:09:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b3646be802e34810b0e66c68a88a3e3b',uuid=e8f3f84a-a594-43d9-bab3-0c34ae22eb35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.898 185478 DEBUG nova.network.os_vif_util [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Converting VIF {"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.899 185478 DEBUG nova.network.os_vif_util [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:b3:81,bridge_name='br-int',has_traffic_filtering=True,id=b2305559-518c-443d-8e89-66e8c7533280,network=Network(789d59ac-11f1-48c0-a5bc-712b3342f5f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2305559-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.899 185478 DEBUG os_vif [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:b3:81,bridge_name='br-int',has_traffic_filtering=True,id=b2305559-518c-443d-8e89-66e8c7533280,network=Network(789d59ac-11f1-48c0-a5bc-712b3342f5f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2305559-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.899 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.900 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.900 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.904 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.904 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2305559-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.905 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2305559-51, col_values=(('external_ids', {'iface-id': 'b2305559-518c-443d-8e89-66e8c7533280', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:b3:81', 'vm-uuid': 'e8f3f84a-a594-43d9-bab3-0c34ae22eb35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.907 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.908 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 15:09:09 compute-0 NetworkManager[56139]: <info>  [1767625749.9101] manager: (tapb2305559-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.921 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.924 185478 INFO os_vif [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:b3:81,bridge_name='br-int',has_traffic_filtering=True,id=b2305559-518c-443d-8e89-66e8c7533280,network=Network(789d59ac-11f1-48c0-a5bc-712b3342f5f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2305559-51')
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.997 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.997 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.998 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] No VIF found with MAC fa:16:3e:6a:b3:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 15:09:09 compute-0 nova_compute[185474]: 2026-01-05 15:09:09.998 185478 INFO nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Using config drive
Jan 05 15:09:10 compute-0 ovn_controller[97763]: 2026-01-05T15:09:10Z|00076|binding|INFO|Releasing lport 707d34b3-bc8b-4c2e-8e88-017cd6da92d0 from this chassis (sb_readonly=0)
Jan 05 15:09:10 compute-0 ovn_controller[97763]: 2026-01-05T15:09:10Z|00077|binding|INFO|Releasing lport dc7328d3-d992-4424-9638-c56a9b7d138d from this chassis (sb_readonly=0)
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.120 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.345 185478 DEBUG nova.compute.manager [req-d417a471-4afe-4256-9e56-6f43708cf641 req-61beb312-d74c-41ad-8adc-af1462a3520c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Received event network-changed-b2305559-518c-443d-8e89-66e8c7533280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.346 185478 DEBUG nova.compute.manager [req-d417a471-4afe-4256-9e56-6f43708cf641 req-61beb312-d74c-41ad-8adc-af1462a3520c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Refreshing instance network info cache due to event network-changed-b2305559-518c-443d-8e89-66e8c7533280. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.347 185478 DEBUG oslo_concurrency.lockutils [req-d417a471-4afe-4256-9e56-6f43708cf641 req-61beb312-d74c-41ad-8adc-af1462a3520c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-e8f3f84a-a594-43d9-bab3-0c34ae22eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.348 185478 DEBUG oslo_concurrency.lockutils [req-d417a471-4afe-4256-9e56-6f43708cf641 req-61beb312-d74c-41ad-8adc-af1462a3520c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-e8f3f84a-a594-43d9-bab3-0c34ae22eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.349 185478 DEBUG nova.network.neutron [req-d417a471-4afe-4256-9e56-6f43708cf641 req-61beb312-d74c-41ad-8adc-af1462a3520c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Refreshing network info cache for port b2305559-518c-443d-8e89-66e8c7533280 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:09:10 compute-0 ovn_controller[97763]: 2026-01-05T15:09:10Z|00078|binding|INFO|Releasing lport 707d34b3-bc8b-4c2e-8e88-017cd6da92d0 from this chassis (sb_readonly=0)
Jan 05 15:09:10 compute-0 ovn_controller[97763]: 2026-01-05T15:09:10Z|00079|binding|INFO|Releasing lport dc7328d3-d992-4424-9638-c56a9b7d138d from this chassis (sb_readonly=0)
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.402 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.477 185478 DEBUG nova.network.neutron [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Successfully created port: a5cac4ea-b043-4a43-9bef-a37897937741 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.852 185478 INFO nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Creating config drive at /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk.config
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.860 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyjt6c4fu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:10 compute-0 nova_compute[185474]: 2026-01-05 15:09:10.987 185478 DEBUG oslo_concurrency.processutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyjt6c4fu" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:11 compute-0 kernel: tapb2305559-51: entered promiscuous mode
Jan 05 15:09:11 compute-0 NetworkManager[56139]: <info>  [1767625751.0831] manager: (tapb2305559-51): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.085 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:11 compute-0 ovn_controller[97763]: 2026-01-05T15:09:11Z|00080|binding|INFO|Claiming lport b2305559-518c-443d-8e89-66e8c7533280 for this chassis.
Jan 05 15:09:11 compute-0 ovn_controller[97763]: 2026-01-05T15:09:11Z|00081|binding|INFO|b2305559-518c-443d-8e89-66e8c7533280: Claiming fa:16:3e:6a:b3:81 10.100.0.5
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.092 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.096 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:b3:81 10.100.0.5'], port_security=['fa:16:3e:6a:b3:81 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e8f3f84a-a594-43d9-bab3-0c34ae22eb35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-789d59ac-11f1-48c0-a5bc-712b3342f5f3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c91575382ac0488994f8b0a9212854c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3848b7a3-0cba-49e5-aadb-aa2d56faf9fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb4ef76f-23a1-4112-ad2e-da98703f38a2, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=b2305559-518c-443d-8e89-66e8c7533280) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.097 107222 INFO neutron.agent.ovn.metadata.agent [-] Port b2305559-518c-443d-8e89-66e8c7533280 in datapath 789d59ac-11f1-48c0-a5bc-712b3342f5f3 bound to our chassis
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.100 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 789d59ac-11f1-48c0-a5bc-712b3342f5f3
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.113 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9a0f97-eb6e-402d-a9bf-6386e3b4c047]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.115 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap789d59ac-11 in ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 15:09:11 compute-0 systemd-udevd[250253]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.118 239805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap789d59ac-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.118 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[db138673-56d5-445d-9908-ad7e03bb11f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.120 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[e852696d-ac44-4af8-ae34-c6c6e4ea35b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 NetworkManager[56139]: <info>  [1767625751.1326] device (tapb2305559-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 15:09:11 compute-0 NetworkManager[56139]: <info>  [1767625751.1331] device (tapb2305559-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.139 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[f8768640-d81a-4c2d-96bb-1455fef1c4f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 systemd-machined[156786]: New machine qemu-8-instance-00000008.
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.162 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:11 compute-0 ovn_controller[97763]: 2026-01-05T15:09:11Z|00082|binding|INFO|Releasing lport 707d34b3-bc8b-4c2e-8e88-017cd6da92d0 from this chassis (sb_readonly=0)
Jan 05 15:09:11 compute-0 ovn_controller[97763]: 2026-01-05T15:09:11Z|00083|binding|INFO|Releasing lport dc7328d3-d992-4424-9638-c56a9b7d138d from this chassis (sb_readonly=0)
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.169 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbb3505-17f6-4601-98a4-d22296d095f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 05 15:09:11 compute-0 ovn_controller[97763]: 2026-01-05T15:09:11Z|00084|binding|INFO|Setting lport b2305559-518c-443d-8e89-66e8c7533280 ovn-installed in OVS
Jan 05 15:09:11 compute-0 ovn_controller[97763]: 2026-01-05T15:09:11Z|00085|binding|INFO|Setting lport b2305559-518c-443d-8e89-66e8c7533280 up in Southbound
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.177 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.208 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[8935dcc3-3c79-4d1b-9af9-ed9fda210cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 NetworkManager[56139]: <info>  [1767625751.2240] manager: (tap789d59ac-10): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.222 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[b53575c8-3f1d-4d92-b69c-e04c3b97c027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.264 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[a946462c-5e62-4624-a6cb-5f2076b6b88b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.268 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[4f921164-a8f7-496e-94ff-37a1ca88c7f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 NetworkManager[56139]: <info>  [1767625751.2937] device (tap789d59ac-10): carrier: link connected
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.299 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca19cb2-9bf5-4f78-8bab-62fc6ad08545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.318 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba422a5-9d3e-4889-afdf-50bb6907104c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap789d59ac-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:ee:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507823, 'reachable_time': 26537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250288, 'error': None, 'target': 'ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.334 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6ae067-a453-44d2-bd84-cba91d7bf70f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:ee0f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507823, 'tstamp': 507823}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250289, 'error': None, 'target': 'ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.353 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[701df5cf-895f-42da-af95-070f2955f0d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap789d59ac-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:ee:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507823, 'reachable_time': 26537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250290, 'error': None, 'target': 'ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.375 185478 DEBUG nova.network.neutron [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Successfully updated port: a5cac4ea-b043-4a43-9bef-a37897937741 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.399 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff4aa0f-79a9-4abb-af6b-f74dc511c6ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.403 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquiring lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.403 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquired lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.404 185478 DEBUG nova.network.neutron [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
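The three nova_compute lines above show the usual pattern for rebuilding an instance's network info cache: take the per-instance "refresh_cache-<uuid>" lock, rebuild the cache, then release (the release appears further down at 15:09:12.846). A minimal sketch of the same oslo.concurrency pattern, with the lock name taken from the log and the body left as a placeholder:

    from oslo_concurrency import lockutils

    instance_uuid = '00943943-b19d-4862-8829-45a5cc14e988'

    @lockutils.synchronized('refresh_cache-%s' % instance_uuid)
    def refresh_network_cache():
        # placeholder: query Neutron for the instance's ports and rewrite
        # instance_info_cache, as _get_instance_nw_info does in the log
        pass

    refresh_network_cache()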
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.485 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[8eab7ae7-bff5-49dd-8766-b7413f4c315f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.487 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap789d59ac-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.488 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.488 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap789d59ac-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.490 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:11 compute-0 kernel: tap789d59ac-10: entered promiscuous mode
Jan 05 15:09:11 compute-0 NetworkManager[56139]: <info>  [1767625751.4923] manager: (tap789d59ac-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.495 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap789d59ac-10, col_values=(('external_ids', {'iface-id': '6927012b-4832-4a5d-ad3c-7ccc0585064b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
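The three ovsdbapp commands above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) re-plug the metadata tap port into the integration bridge and stamp it with the OVN iface-id. A sketch of the equivalent calls through the ovsdbapp idl API, assuming the standard local ovsdb-server unix socket; port, bridge and iface-id values are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                            'Open_vSwitch'),
        timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap789d59ac-10', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap789d59ac-10', may_exist=True))
        txn.add(ovs.db_set('Interface', 'tap789d59ac-10',
                           ('external_ids',
                            {'iface-id': '6927012b-4832-4a5d-ad3c-7ccc0585064b'})))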
Jan 05 15:09:11 compute-0 ovn_controller[97763]: 2026-01-05T15:09:11Z|00086|binding|INFO|Releasing lport 6927012b-4832-4a5d-ad3c-7ccc0585064b from this chassis (sb_readonly=0)
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.523 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.524 107222 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/789d59ac-11f1-48c0-a5bc-712b3342f5f3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/789d59ac-11f1-48c0-a5bc-712b3342f5f3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.525 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[323c7e3c-1d3c-49e2-adc2-2209d5351791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.526 107222 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: global
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     log         /dev/log local0 debug
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     log-tag     haproxy-metadata-proxy-789d59ac-11f1-48c0-a5bc-712b3342f5f3
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     user        root
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     group       root
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     maxconn     1024
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     pidfile     /var/lib/neutron/external/pids/789d59ac-11f1-48c0-a5bc-712b3342f5f3.pid.haproxy
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     daemon
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: defaults
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     log global
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     mode http
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     option httplog
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     option dontlognull
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     option http-server-close
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     option forwardfor
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     retries                 3
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     timeout http-request    30s
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     timeout connect         30s
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     timeout client          32s
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     timeout server          32s
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     timeout http-keep-alive 30s
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: listen listener
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     bind 169.254.169.254:80
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:     http-request add-header X-OVN-Network-ID 789d59ac-11f1-48c0-a5bc-712b3342f5f3
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 15:09:11 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:11.527 107222 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3', 'env', 'PROCESS_TAG=haproxy-789d59ac-11f1-48c0-a5bc-712b3342f5f3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/789d59ac-11f1-48c0-a5bc-712b3342f5f3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
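Having rendered the haproxy configuration above (a listener on 169.254.169.254:80, a backend whose address is the /var/lib/neutron/metadata_proxy path, which haproxy treats as a unix socket, and an added X-OVN-Network-ID header so the metadata service can tell which network the request came from), the agent spawns haproxy inside the ovnmeta- namespace via rootwrap. Stripped of the rootwrap wrapper, the command amounts to roughly the following; paths and the process tag are copied from the log, and it must run as root:

    import subprocess

    netns = 'ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/789d59ac-11f1-48c0-a5bc-712b3342f5f3.conf'

    # Same effect as the rootwrap invocation logged above: enter the namespace,
    # tag the process, and let haproxy daemonize itself (the config says 'daemon').
    subprocess.check_call([
        'ip', 'netns', 'exec', netns,
        'env', 'PROCESS_TAG=haproxy-789d59ac-11f1-48c0-a5bc-712b3342f5f3',
        'haproxy', '-f', cfg,
    ])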
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.583 185478 DEBUG nova.network.neutron [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.587 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625751.5875776, e8f3f84a-a594-43d9-bab3-0c34ae22eb35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.588 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] VM Started (Lifecycle Event)
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.610 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.616 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625751.587723, e8f3f84a-a594-43d9-bab3-0c34ae22eb35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.616 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] VM Paused (Lifecycle Event)
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.636 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.652 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:09:11 compute-0 nova_compute[185474]: 2026-01-05 15:09:11.708 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] During sync_power_state the instance has a pending task (spawning). Skip.
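The lifecycle handling above compares libvirt's view of the guest (VM power_state: 3) with the database record (power_state: 0) and skips the sync because the instance still has a pending spawning task. The numbers follow nova's power_state constants; a small sketch of the decision, with the constant values stated as an assumption (see nova.compute.power_state):

    # assumed mapping of nova power_state values, matching "DB 0, VM 3" above
    STATE_NAME = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                  4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    def describe_sync(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # matches the log: the sync is skipped while a task is pending
            return 'skip: pending task %s' % task_state
        return 'sync %s -> %s' % (STATE_NAME[db_power_state],
                                  STATE_NAME[vm_power_state])

    print(describe_sync(0, 3, 'spawning'))   # -> skip: pending task spawning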
Jan 05 15:09:12 compute-0 podman[250327]: 2026-01-05 15:09:11.917982505 +0000 UTC m=+0.049417345 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 15:09:12 compute-0 podman[250327]: 2026-01-05 15:09:12.032596483 +0000 UTC m=+0.164031283 container create f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 15:09:12 compute-0 systemd[1]: Started libpod-conmon-f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539.scope.
Jan 05 15:09:12 compute-0 systemd[1]: Started libcrun container.
Jan 05 15:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8060be51428a8bfbeb01d6b8fe2bcb201011354b55d207e690abcdc9c2ddaf4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 15:09:12 compute-0 podman[250339]: 2026-01-05 15:09:12.184122241 +0000 UTC m=+0.108548508 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 15:09:12 compute-0 podman[250327]: 2026-01-05 15:09:12.206369967 +0000 UTC m=+0.337804747 container init f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 15:09:12 compute-0 podman[250340]: 2026-01-05 15:09:12.206668705 +0000 UTC m=+0.116993514 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:09:12 compute-0 podman[250327]: 2026-01-05 15:09:12.215980904 +0000 UTC m=+0.347415664 container start f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 15:09:12 compute-0 neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3[250375]: [NOTICE]   (250407) : New worker (250409) forked
Jan 05 15:09:12 compute-0 neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3[250375]: [NOTICE]   (250407) : Loading success.
Jan 05 15:09:12 compute-0 podman[250341]: 2026-01-05 15:09:12.249929113 +0000 UTC m=+0.174158894 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.563 185478 DEBUG nova.compute.manager [req-c8f9a152-00a0-497a-882c-e9c6820b8b16 req-6e65dd93-fe80-497a-b2ae-8f944370d37c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Received event network-vif-plugged-fae4cff5-7c84-4731-9afc-a8de3de83750 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.564 185478 DEBUG oslo_concurrency.lockutils [req-c8f9a152-00a0-497a-882c-e9c6820b8b16 req-6e65dd93-fe80-497a-b2ae-8f944370d37c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "b609148c-bafc-4084-9491-68114aa80c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.564 185478 DEBUG oslo_concurrency.lockutils [req-c8f9a152-00a0-497a-882c-e9c6820b8b16 req-6e65dd93-fe80-497a-b2ae-8f944370d37c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.565 185478 DEBUG oslo_concurrency.lockutils [req-c8f9a152-00a0-497a-882c-e9c6820b8b16 req-6e65dd93-fe80-497a-b2ae-8f944370d37c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.566 185478 DEBUG nova.compute.manager [req-c8f9a152-00a0-497a-882c-e9c6820b8b16 req-6e65dd93-fe80-497a-b2ae-8f944370d37c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] No waiting events found dispatching network-vif-plugged-fae4cff5-7c84-4731-9afc-a8de3de83750 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.566 185478 WARNING nova.compute.manager [req-c8f9a152-00a0-497a-882c-e9c6820b8b16 req-6e65dd93-fe80-497a-b2ae-8f944370d37c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Received unexpected event network-vif-plugged-fae4cff5-7c84-4731-9afc-a8de3de83750 for instance with vm_state active and task_state None.
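Here network-vif-plugged-fae4cff5... arrives for instance b609148c..., but nothing registered a waiter for it (the instance is already active with no task), so nova logs it as unexpected and drops it. A simplified sketch of that waiter registry; the real implementation is nova.compute.manager.InstanceEvents, and the same registry is what the "Preparing to wait for external event" line further down populates:

    import threading

    _waiters = {}   # (instance_uuid, event_name) -> threading.Event

    def prepare(instance_uuid, event_name):
        # what "Preparing to wait for external event" does, in spirit
        ev = threading.Event()
        _waiters[(instance_uuid, event_name)] = ev
        return ev

    def deliver(instance_uuid, event_name):
        ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            # matches the WARNING above: nobody is waiting, drop the event
            print('unexpected event %s for %s' % (event_name, instance_uuid))
            return
        ev.set()   # wakes whoever is blocked on ev.wait()

    deliver('b609148c-bafc-4084-9491-68114aa80c67',
            'network-vif-plugged-fae4cff5-7c84-4731-9afc-a8de3de83750')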
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.812 185478 DEBUG nova.network.neutron [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updating instance_info_cache with network_info: [{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.846 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Releasing lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.847 185478 DEBUG nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Instance network_info: |[{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.850 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Start _get_guest_xml network_info=[{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T15:08:04Z,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T15:08:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.860 185478 WARNING nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.867 185478 DEBUG nova.virt.libvirt.host [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.869 185478 DEBUG nova.virt.libvirt.host [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.876 185478 DEBUG nova.virt.libvirt.host [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.878 185478 DEBUG nova.virt.libvirt.host [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.878 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.879 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T15:08:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3a2fb381-0342-40f9-8eb5-089f8c9475fd',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T15:08:04Z,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T15:08:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.880 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.881 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.882 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.883 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.883 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.884 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.885 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.886 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.887 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.888 185478 DEBUG nova.virt.hardware [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
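With no CPU topology constraints from the flavor or image, nova falls back to the 65536 maxima and enumerates factorizations of the vCPU count, which for 1 vCPU leaves only sockets=1, cores=1, threads=1 (the <topology> element in the guest XML below). A simplified sketch of that enumeration; the actual logic lives in nova.virt.hardware:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # enumerate (sockets, cores, threads) whose product equals the vCPU
        # count, capped by the (here unconstrained) maxima, as in the log above
        found = []
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    print(possible_topologies(1))   # -> [(1, 1, 1)]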
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.895 185478 DEBUG nova.virt.libvirt.vif [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T15:09:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2119923937',display_name='tempest-AttachInterfacesUnderV243Test-server-2119923937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2119923937',id=9,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLRWF0igOmjUqciCdcQsNqq1aoP2HXVt2yMHyHPspquCYaWxipNZGYRoqCjUoX4h1lffVsVdusNGhAqfhZ9lm8z3wYDXAD/OOHnyZ9tx3SH0v3i91uNHw2qyCkiBpGo6Hw==',key_name='tempest-keypair-349641192',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='47a5a3a457584254b36f5f2118cf6568',ramdisk_id='',reservation_id='r-viu0ztax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1148358506',owner_user_name='tempest-AttachInterfacesUnderV243Test-1148358506-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:09:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f2d114b57ba04fe69b1c1c673fb3da52',uuid=00943943-b19d-4862-8829-45a5cc14e988,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": 
"a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.896 185478 DEBUG nova.network.os_vif_util [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Converting VIF {"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.897 185478 DEBUG nova.network.os_vif_util [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:a0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a5cac4ea-b043-4a43-9bef-a37897937741,network=Network(581293f8-9c7d-4afe-8455-8275f58d2374),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5cac4ea-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
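Before plugging, the nova VIF dict shown above is converted into an os-vif object (the "Converted object VIFOpenVSwitch(...)" line) that the ovs plugin understands. A rough sketch of building an equivalent object directly; the field names follow the logged repr, but treat the exact constructors and the trimmed network details as assumptions about the os_vif object model:

    from os_vif.objects import network as network_obj
    from os_vif.objects import vif as vif_obj

    # values copied from the log; subnets and other fields omitted for brevity
    network = network_obj.Network(id='581293f8-9c7d-4afe-8455-8275f58d2374',
                                  bridge='br-int', mtu=1442)
    vif = vif_obj.VIFOpenVSwitch(
        id='a5cac4ea-b043-4a43-9bef-a37897937741',
        address='fa:16:3e:cb:a0:eb',
        bridge_name='br-int',
        vif_name='tapa5cac4ea-b0',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=network,
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='a5cac4ea-b043-4a43-9bef-a37897937741'))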
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.899 185478 DEBUG nova.objects.instance [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lazy-loading 'pci_devices' on Instance uuid 00943943-b19d-4862-8829-45a5cc14e988 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.932 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] End _get_guest_xml xml=<domain type="kvm">
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <uuid>00943943-b19d-4862-8829-45a5cc14e988</uuid>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <name>instance-00000009</name>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <memory>131072</memory>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <metadata>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-2119923937</nova:name>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 15:09:12</nova:creationTime>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <nova:flavor name="m1.nano">
Jan 05 15:09:12 compute-0 nova_compute[185474]:         <nova:memory>128</nova:memory>
Jan 05 15:09:12 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 15:09:12 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 15:09:12 compute-0 nova_compute[185474]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 15:09:12 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 15:09:12 compute-0 nova_compute[185474]:         <nova:user uuid="f2d114b57ba04fe69b1c1c673fb3da52">tempest-AttachInterfacesUnderV243Test-1148358506-project-member</nova:user>
Jan 05 15:09:12 compute-0 nova_compute[185474]:         <nova:project uuid="47a5a3a457584254b36f5f2118cf6568">tempest-AttachInterfacesUnderV243Test-1148358506</nova:project>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="e22fea2c-125b-4347-8d96-267cb6a6831b"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <nova:ports>
Jan 05 15:09:12 compute-0 nova_compute[185474]:         <nova:port uuid="a5cac4ea-b043-4a43-9bef-a37897937741">
Jan 05 15:09:12 compute-0 nova_compute[185474]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:         </nova:port>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       </nova:ports>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   </metadata>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <system>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <entry name="serial">00943943-b19d-4862-8829-45a5cc14e988</entry>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <entry name="uuid">00943943-b19d-4862-8829-45a5cc14e988</entry>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     </system>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <os>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   </os>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <features>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <apic/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   </features>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   </clock>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   </cpu>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   <devices>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk.config"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <interface type="ethernet">
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <mac address="fa:16:3e:cb:a0:eb"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <mtu size="1442"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <target dev="tapa5cac4ea-b0"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     </interface>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/console.log" append="off"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     </serial>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <video>
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     </video>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     </rng>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 15:09:12 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 15:09:12 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 15:09:12 compute-0 nova_compute[185474]:   </devices>
Jan 05 15:09:12 compute-0 nova_compute[185474]: </domain>
Jan 05 15:09:12 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
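That closes the domain XML produced by _get_guest_xml. Nova hands this document to libvirt to define and launch the guest, typically started paused and resumed once the network-vif-plugged event prepared below has been received. A minimal sketch with the libvirt-python bindings, assuming the XML above has been saved to a file; this is illustrative, not the driver's actual code path:

    import libvirt

    # assumption for the sketch: the <domain> document above was saved here
    xml = open('/tmp/instance-00000009.xml').read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)                              # persist the definition
        dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)   # start paused; resume
                                                               # after vif plugging
    finally:
        conn.close()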
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.935 185478 DEBUG nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Preparing to wait for external event network-vif-plugged-a5cac4ea-b043-4a43-9bef-a37897937741 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.935 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquiring lock "00943943-b19d-4862-8829-45a5cc14e988-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.937 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "00943943-b19d-4862-8829-45a5cc14e988-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.937 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "00943943-b19d-4862-8829-45a5cc14e988-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.939 185478 DEBUG nova.virt.libvirt.vif [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T15:09:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2119923937',display_name='tempest-AttachInterfacesUnderV243Test-server-2119923937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2119923937',id=9,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLRWF0igOmjUqciCdcQsNqq1aoP2HXVt2yMHyHPspquCYaWxipNZGYRoqCjUoX4h1lffVsVdusNGhAqfhZ9lm8z3wYDXAD/OOHnyZ9tx3SH0v3i91uNHw2qyCkiBpGo6Hw==',key_name='tempest-keypair-349641192',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='47a5a3a457584254b36f5f2118cf6568',ramdisk_id='',reservation_id='r-viu0ztax',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1148358506',owner_user_name='tempest-AttachInterfacesUnderV243Test-1148358506-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:09:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f2d114b57ba04fe69b1c1c673fb3da52',uuid=00943943-b19d-4862-8829-45a5cc14e988,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", 
"ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.940 185478 DEBUG nova.network.os_vif_util [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Converting VIF {"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.942 185478 DEBUG nova.network.os_vif_util [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:a0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a5cac4ea-b043-4a43-9bef-a37897937741,network=Network(581293f8-9c7d-4afe-8455-8275f58d2374),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5cac4ea-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.943 185478 DEBUG os_vif [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:a0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a5cac4ea-b043-4a43-9bef-a37897937741,network=Network(581293f8-9c7d-4afe-8455-8275f58d2374),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5cac4ea-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.944 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.945 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.946 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.953 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.954 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5cac4ea-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.955 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5cac4ea-b0, col_values=(('external_ids', {'iface-id': 'a5cac4ea-b043-4a43-9bef-a37897937741', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:a0:eb', 'vm-uuid': '00943943-b19d-4862-8829-45a5cc14e988'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
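The two ovsdbapp transactions above (AddPortCommand followed by DbSetCommand) are what wire the tap device into br-int and stamp it with the Neutron port UUID that ovn-controller later matches against. A minimal sketch of the roughly equivalent change expressed as ovs-vsctl calls from Python; the values are taken from the log lines above, but nova/os-vif actually drives the OVSDB IDL directly via ovsdbapp rather than shelling out like this.

    # Sketch only: same OVSDB effect as the AddPortCommand/DbSetCommand txns above,
    # written as an ovs-vsctl invocation. Not the code path nova uses.
    import subprocess

    PORT = "tapa5cac4ea-b0"                            # devname from the VIF above
    IFACE_ID = "a5cac4ea-b043-4a43-9bef-a37897937741"  # Neutron port UUID
    MAC = "fa:16:3e:cb:a0:eb"
    VM_UUID = "00943943-b19d-4862-8829-45a5cc14e988"

    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT,
         "--", "set", "Interface", PORT,
         f"external_ids:iface-id={IFACE_ID}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={MAC}",
         f"external_ids:vm-uuid={VM_UUID}"],
        check=True,
    )

ovn-controller watches br-int for interfaces whose external_ids:iface-id matches a logical port in the Southbound database, which is what triggers the "Claiming lport" messages that appear a moment later.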
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.958 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:12 compute-0 NetworkManager[56139]: <info>  [1767625752.9615] manager: (tapa5cac4ea-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.961 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.976 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:12 compute-0 nova_compute[185474]: 2026-01-05 15:09:12.978 185478 INFO os_vif [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:a0:eb,bridge_name='br-int',has_traffic_filtering=True,id=a5cac4ea-b043-4a43-9bef-a37897937741,network=Network(581293f8-9c7d-4afe-8455-8275f58d2374),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5cac4ea-b0')
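The plug itself goes through the os-vif library, which receives the VIFOpenVSwitch object produced by the nova_to_osvif_vif conversion logged above. A hedged sketch of that call pattern follows; the vif and instance objects are normally built by nova from its network info cache, so this only illustrates the API shape, not a standalone deployment step.

    # Sketch of the os_vif call pattern visible in the log (os_vif/__init__.py plug()).
    import os_vif
    from os_vif.objects import instance_info as osv_instance

    os_vif.initialize()                    # loads the registered plugins (ovs, ...)
    inst = osv_instance.InstanceInfo(
        uuid="00943943-b19d-4862-8829-45a5cc14e988",
        name="instance-00000009",
    )
    # 'vif' would be the VIFOpenVSwitch object shown in the log above:
    # os_vif.plug(vif, inst)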
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.080 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.081 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.082 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] No VIF found with MAC fa:16:3e:cb:a0:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.082 185478 INFO nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Using config drive
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.418 185478 DEBUG nova.network.neutron [req-d417a471-4afe-4256-9e56-6f43708cf641 req-61beb312-d74c-41ad-8adc-af1462a3520c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Updated VIF entry in instance network info cache for port b2305559-518c-443d-8e89-66e8c7533280. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.419 185478 DEBUG nova.network.neutron [req-d417a471-4afe-4256-9e56-6f43708cf641 req-61beb312-d74c-41ad-8adc-af1462a3520c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Updating instance_info_cache with network_info: [{"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.443 185478 DEBUG oslo_concurrency.lockutils [req-d417a471-4afe-4256-9e56-6f43708cf641 req-61beb312-d74c-41ad-8adc-af1462a3520c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-e8f3f84a-a594-43d9-bab3-0c34ae22eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.521 185478 INFO nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Creating config drive at /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk.config
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.531 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdnrc5q4a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.613 185478 DEBUG nova.compute.manager [req-3c7f29c5-b2c2-4784-ac40-c9f6a03f83a2 req-d215f802-54a8-43d7-bcfa-5ef98c9c84d2 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Received event network-changed-a5cac4ea-b043-4a43-9bef-a37897937741 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.614 185478 DEBUG nova.compute.manager [req-3c7f29c5-b2c2-4784-ac40-c9f6a03f83a2 req-d215f802-54a8-43d7-bcfa-5ef98c9c84d2 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Refreshing instance network info cache due to event network-changed-a5cac4ea-b043-4a43-9bef-a37897937741. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.615 185478 DEBUG oslo_concurrency.lockutils [req-3c7f29c5-b2c2-4784-ac40-c9f6a03f83a2 req-d215f802-54a8-43d7-bcfa-5ef98c9c84d2 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.616 185478 DEBUG oslo_concurrency.lockutils [req-3c7f29c5-b2c2-4784-ac40-c9f6a03f83a2 req-d215f802-54a8-43d7-bcfa-5ef98c9c84d2 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.616 185478 DEBUG nova.network.neutron [req-3c7f29c5-b2c2-4784-ac40-c9f6a03f83a2 req-d215f802-54a8-43d7-bcfa-5ef98c9c84d2 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Refreshing network info cache for port a5cac4ea-b043-4a43-9bef-a37897937741 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.676 185478 DEBUG oslo_concurrency.processutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdnrc5q4a" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
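The config drive is just an ISO9660/Joliet image built from a temporary staging directory, so the mkisofs command logged above can be reproduced almost verbatim. In the sketch below /tmp/metadata-staging is a hypothetical directory laid out the way nova populates its temp dir (openstack/latest/meta_data.json, user_data, and so on); the publisher version string from the log is omitted.

    # Sketch: build a config-drive image the same way as the logged mkisofs command.
    import subprocess

    out = "/var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk.config"
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", out,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/metadata-staging"],       # hypothetical staging directory
        check=True,
    )

Guests locate the drive by its config-2 volume label rather than by device name.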
Jan 05 15:09:13 compute-0 kernel: tapa5cac4ea-b0: entered promiscuous mode
Jan 05 15:09:13 compute-0 NetworkManager[56139]: <info>  [1767625753.7619] manager: (tapa5cac4ea-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 05 15:09:13 compute-0 ovn_controller[97763]: 2026-01-05T15:09:13Z|00087|binding|INFO|Claiming lport a5cac4ea-b043-4a43-9bef-a37897937741 for this chassis.
Jan 05 15:09:13 compute-0 ovn_controller[97763]: 2026-01-05T15:09:13Z|00088|binding|INFO|a5cac4ea-b043-4a43-9bef-a37897937741: Claiming fa:16:3e:cb:a0:eb 10.100.0.8
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.777 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:13 compute-0 NetworkManager[56139]: <info>  [1767625753.7937] device (tapa5cac4ea-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.796 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:a0:eb 10.100.0.8'], port_security=['fa:16:3e:cb:a0:eb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '00943943-b19d-4862-8829-45a5cc14e988', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-581293f8-9c7d-4afe-8455-8275f58d2374', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '47a5a3a457584254b36f5f2118cf6568', 'neutron:revision_number': '2', 'neutron:security_group_ids': '693868aa-bb86-4369-9f74-0ab1c06f142a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ac1a7422-0985-4ff5-a7e8-a666d4702cda, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=a5cac4ea-b043-4a43-9bef-a37897937741) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.797 107222 INFO neutron.agent.ovn.metadata.agent [-] Port a5cac4ea-b043-4a43-9bef-a37897937741 in datapath 581293f8-9c7d-4afe-8455-8275f58d2374 bound to our chassis
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.799 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 581293f8-9c7d-4afe-8455-8275f58d2374
Jan 05 15:09:13 compute-0 NetworkManager[56139]: <info>  [1767625753.8093] device (tapa5cac4ea-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.814 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[dabecbac-1eb9-4fee-a95b-50a29ba8fe86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.814 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap581293f8-91 in ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
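The metadata agent performs this step through pyroute2 behind privsep, but the net effect is an ordinary veth pair: the -90 end stays in the root namespace (it is added to br-int a few lines below), while the -91 end is moved into the ovnmeta- namespace and brought up. A hedged iproute2-based sketch, assuming the namespace does not already exist:

    # Sketch of the namespace plumbing the agent performs via privsep/pyroute2.
    import subprocess

    NS = "ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374"
    for cmd in (
        ["ip", "netns", "add", NS],                          # agent normally creates this
        ["ip", "link", "add", "tap581293f8-90", "type", "veth",
         "peer", "name", "tap581293f8-91"],
        ["ip", "link", "set", "tap581293f8-91", "netns", NS],
        ["ip", "-n", NS, "link", "set", "tap581293f8-91", "up"],
        ["ip", "link", "set", "tap581293f8-90", "up"],
    ):
        subprocess.run(cmd, check=True)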
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.817 239805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap581293f8-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.817 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[b403ba0e-f103-4a69-b763-53608c4c0801]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.818 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[95e52c03-a107-4298-b7b3-ec579c8840c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.836 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[30c6cddc-8ebc-4263-b987-591b15a03687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:13 compute-0 systemd-machined[156786]: New machine qemu-9-instance-00000009.
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.867 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.867 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[da3df010-440a-479b-a2b6-0a1a6a050d1b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:13 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 05 15:09:13 compute-0 ovn_controller[97763]: 2026-01-05T15:09:13Z|00089|binding|INFO|Setting lport a5cac4ea-b043-4a43-9bef-a37897937741 ovn-installed in OVS
Jan 05 15:09:13 compute-0 ovn_controller[97763]: 2026-01-05T15:09:13Z|00090|binding|INFO|Setting lport a5cac4ea-b043-4a43-9bef-a37897937741 up in Southbound
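Once ovn-controller has claimed the port and set it up, the binding is recorded in the Southbound database and can be checked there. A hedged sketch of that check; in a podified deployment ovn-sbctl may need to be run inside the ovn-controller container and pointed at the SB DB socket, which is not shown here.

    # Sketch: inspect the Port_Binding row the ovn-controller messages above refer to
    # (chassis and up fields should now reflect this compute node).
    import subprocess

    LPORT = "a5cac4ea-b043-4a43-9bef-a37897937741"
    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={LPORT}"],
        check=True,
    )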
Jan 05 15:09:13 compute-0 nova_compute[185474]: 2026-01-05 15:09:13.869 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.923 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[d865796a-b6fc-41b3-a1d6-c7134d9d97a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:13 compute-0 NetworkManager[56139]: <info>  [1767625753.9386] manager: (tap581293f8-90): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.937 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[255157f6-92c0-4865-9d2d-a3a230e7549b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.977 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbd3f8e-b898-4c53-9881-12ee23472c60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:13 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:13.981 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9b6f07-8fc3-4b3f-b366-5eb0b01c118d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:14 compute-0 NetworkManager[56139]: <info>  [1767625754.0136] device (tap581293f8-90): carrier: link connected
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.020 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0a5eca-35cf-47e2-aad6-39ec21b43d90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.038 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f68351c2-f861-4f73-8bf0-8e4b34931ba1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap581293f8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:bb:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508095, 'reachable_time': 20839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250455, 'error': None, 'target': 'ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.056 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf0bc64-3601-441f-addb-36e43b868414]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecc:bb3c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508095, 'tstamp': 508095}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250456, 'error': None, 'target': 'ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.079 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[3965bbcb-555d-41e0-ba9b-38893ccc6695]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap581293f8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:bb:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508095, 'reachable_time': 20839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250457, 'error': None, 'target': 'ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.119 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[543c057d-44ca-4f51-813b-f5aa9b5d1005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.204 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[5526c1d9-58d2-414a-8675-35db3866a53a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.206 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap581293f8-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.206 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.207 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap581293f8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:14 compute-0 kernel: tap581293f8-90: entered promiscuous mode
Jan 05 15:09:14 compute-0 NetworkManager[56139]: <info>  [1767625754.2144] manager: (tap581293f8-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.209 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.217 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap581293f8-90, col_values=(('external_ids', {'iface-id': '02807d47-c59f-4c92-8290-7fec7d1bc7e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.221 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:14 compute-0 ovn_controller[97763]: 2026-01-05T15:09:14Z|00091|binding|INFO|Releasing lport 02807d47-c59f-4c92-8290-7fec7d1bc7e4 from this chassis (sb_readonly=0)
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.223 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.228 107222 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/581293f8-9c7d-4afe-8455-8275f58d2374.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/581293f8-9c7d-4afe-8455-8275f58d2374.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.231 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[454a3d50-279f-4595-a2b8-4550af85074e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.233 107222 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: global
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     log         /dev/log local0 debug
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     log-tag     haproxy-metadata-proxy-581293f8-9c7d-4afe-8455-8275f58d2374
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     user        root
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     group       root
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     maxconn     1024
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     pidfile     /var/lib/neutron/external/pids/581293f8-9c7d-4afe-8455-8275f58d2374.pid.haproxy
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     daemon
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: defaults
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     log global
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     mode http
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     option httplog
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     option dontlognull
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     option http-server-close
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     option forwardfor
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     retries                 3
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     timeout http-request    30s
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     timeout connect         30s
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     timeout client          32s
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     timeout server          32s
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     timeout http-keep-alive 30s
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: listen listener
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     bind 169.254.169.254:80
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:     http-request add-header X-OVN-Network-ID 581293f8-9c7d-4afe-8455-8275f58d2374
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.233 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:14 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:14.234 107222 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374', 'env', 'PROCESS_TAG=haproxy-581293f8-9c7d-4afe-8455-8275f58d2374', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/581293f8-9c7d-4afe-8455-8275f58d2374.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
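The configuration dumped above is written to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf and haproxy is then started inside the ovnmeta- namespace through rootwrap, as the command line shows. A hedged way to sanity-check such a config without starting a daemon is haproxy's check mode run in the same namespace:

    # Sketch: validate the generated proxy config inside the ovnmeta- namespace.
    # haproxy -c only parses the config; it does not bind or fork.
    import subprocess

    NS = "ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374"
    CONF = "/var/lib/neutron/ovn-metadata-proxy/581293f8-9c7d-4afe-8455-8275f58d2374.conf"
    subprocess.run(["ip", "netns", "exec", NS, "haproxy", "-c", "-f", CONF], check=True)

The listener binds 169.254.169.254:80 inside the namespace and forwards requests to the /var/lib/neutron/metadata_proxy unix socket, adding the X-OVN-Network-ID header so the metadata service can identify the network.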
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.325 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.458 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625754.4583383, 00943943-b19d-4862-8829-45a5cc14e988 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.459 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] VM Started (Lifecycle Event)
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.490 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.496 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625754.4584076, 00943943-b19d-4862-8829-45a5cc14e988 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.496 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] VM Paused (Lifecycle Event)
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.524 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.529 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:09:14 compute-0 nova_compute[185474]: 2026-01-05 15:09:14.550 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] During sync_power_state the instance has a pending task (spawning). Skip.
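The numeric power states in the sync_power_state messages come from nova's power_state enum, to which the libvirt domain state is mapped. A small reference sketch of that mapping as I understand it, consistent with the values seen here (3 while libvirt still has the domain paused during setup, 1 once it is resumed):

    # Hedged reference mapping of nova.compute.power_state values.
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    print(POWER_STATE[3], POWER_STATE[1])   # PAUSED RUNNING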
Jan 05 15:09:14 compute-0 podman[250494]: 2026-01-05 15:09:14.74343259 +0000 UTC m=+0.029867340 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 15:09:14 compute-0 podman[250494]: 2026-01-05 15:09:14.858324567 +0000 UTC m=+0.144759277 container create 4733265bfc6816965d980b64343d80b9ebf8e1bd7a6e816b7d8c108ba112d0dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 05 15:09:14 compute-0 systemd[1]: Started libpod-conmon-4733265bfc6816965d980b64343d80b9ebf8e1bd7a6e816b7d8c108ba112d0dc.scope.
Jan 05 15:09:14 compute-0 systemd[1]: Started libcrun container.
Jan 05 15:09:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb43922b1b09f5c11adb8e83d5eea6b024c7bdc0a53d10777435b2e842f87258/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 15:09:15 compute-0 podman[250494]: 2026-01-05 15:09:15.024780355 +0000 UTC m=+0.311215155 container init 4733265bfc6816965d980b64343d80b9ebf8e1bd7a6e816b7d8c108ba112d0dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 15:09:15 compute-0 podman[250494]: 2026-01-05 15:09:15.032950743 +0000 UTC m=+0.319385493 container start 4733265bfc6816965d980b64343d80b9ebf8e1bd7a6e816b7d8c108ba112d0dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 05 15:09:15 compute-0 neutron-haproxy-ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374[250507]: [NOTICE]   (250511) : New worker (250513) forked
Jan 05 15:09:15 compute-0 neutron-haproxy-ovnmeta-581293f8-9c7d-4afe-8455-8275f58d2374[250507]: [NOTICE]   (250511) : Loading success.
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.303 185478 DEBUG nova.compute.manager [req-8ae12ca4-f59e-4698-9b1a-11df092734bf req-82dc592b-74aa-49f3-8f3c-070090d0cb67 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Received event network-vif-plugged-a5cac4ea-b043-4a43-9bef-a37897937741 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.305 185478 DEBUG oslo_concurrency.lockutils [req-8ae12ca4-f59e-4698-9b1a-11df092734bf req-82dc592b-74aa-49f3-8f3c-070090d0cb67 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "00943943-b19d-4862-8829-45a5cc14e988-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.305 185478 DEBUG oslo_concurrency.lockutils [req-8ae12ca4-f59e-4698-9b1a-11df092734bf req-82dc592b-74aa-49f3-8f3c-070090d0cb67 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "00943943-b19d-4862-8829-45a5cc14e988-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.306 185478 DEBUG oslo_concurrency.lockutils [req-8ae12ca4-f59e-4698-9b1a-11df092734bf req-82dc592b-74aa-49f3-8f3c-070090d0cb67 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "00943943-b19d-4862-8829-45a5cc14e988-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.306 185478 DEBUG nova.compute.manager [req-8ae12ca4-f59e-4698-9b1a-11df092734bf req-82dc592b-74aa-49f3-8f3c-070090d0cb67 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Processing event network-vif-plugged-a5cac4ea-b043-4a43-9bef-a37897937741 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.307 185478 DEBUG nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.316 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.317 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625755.3158545, 00943943-b19d-4862-8829-45a5cc14e988 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.317 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] VM Resumed (Lifecycle Event)
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.325 185478 INFO nova.virt.libvirt.driver [-] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Instance spawned successfully.
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.325 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.342 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.350 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.357 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.357 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.358 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.358 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.359 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.360 185478 DEBUG nova.virt.libvirt.driver [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
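These defaults are registered back onto the instance as image-property metadata because the tempest image declares none of them. If different buses or models are wanted, the usual approach is to set the properties on the image up front; a hedged sketch using the openstack CLI, where my-image is a hypothetical image name:

    # Sketch: pre-setting the hardware properties nova would otherwise default,
    # so every instance booted from the image gets them. 'my-image' is hypothetical.
    import subprocess

    subprocess.run(
        ["openstack", "image", "set",
         "--property", "hw_disk_bus=virtio",
         "--property", "hw_cdrom_bus=sata",
         "--property", "hw_vif_model=virtio",
         "my-image"],
        check=True,
    )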
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.403 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.455 185478 INFO nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Took 6.22 seconds to spawn the instance on the hypervisor.
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.456 185478 DEBUG nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.552 185478 INFO nova.compute.manager [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Took 6.76 seconds to build instance.
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.583 185478 DEBUG oslo_concurrency.lockutils [None req-4b1c61fb-dab5-47b7-81be-fc1b71c9d531 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lock "00943943-b19d-4862-8829-45a5cc14e988" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.751 185478 DEBUG nova.network.neutron [req-3c7f29c5-b2c2-4784-ac40-c9f6a03f83a2 req-d215f802-54a8-43d7-bcfa-5ef98c9c84d2 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updated VIF entry in instance network info cache for port a5cac4ea-b043-4a43-9bef-a37897937741. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.752 185478 DEBUG nova.network.neutron [req-3c7f29c5-b2c2-4784-ac40-c9f6a03f83a2 req-d215f802-54a8-43d7-bcfa-5ef98c9c84d2 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updating instance_info_cache with network_info: [{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:15 compute-0 nova_compute[185474]: 2026-01-05 15:09:15.784 185478 DEBUG oslo_concurrency.lockutils [req-3c7f29c5-b2c2-4784-ac40-c9f6a03f83a2 req-d215f802-54a8-43d7-bcfa-5ef98c9c84d2 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.352 185478 DEBUG oslo_concurrency.lockutils [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "b609148c-bafc-4084-9491-68114aa80c67" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.354 185478 DEBUG oslo_concurrency.lockutils [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.355 185478 DEBUG oslo_concurrency.lockutils [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "b609148c-bafc-4084-9491-68114aa80c67-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.357 185478 DEBUG oslo_concurrency.lockutils [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.358 185478 DEBUG oslo_concurrency.lockutils [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.360 185478 INFO nova.compute.manager [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Terminating instance
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.362 185478 DEBUG nova.compute.manager [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 15:09:16 compute-0 kernel: tapfae4cff5-7c (unregistering): left promiscuous mode
Jan 05 15:09:16 compute-0 NetworkManager[56139]: <info>  [1767625756.4204] device (tapfae4cff5-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.432 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:16 compute-0 ovn_controller[97763]: 2026-01-05T15:09:16Z|00092|binding|INFO|Releasing lport fae4cff5-7c84-4731-9afc-a8de3de83750 from this chassis (sb_readonly=0)
Jan 05 15:09:16 compute-0 ovn_controller[97763]: 2026-01-05T15:09:16Z|00093|binding|INFO|Setting lport fae4cff5-7c84-4731-9afc-a8de3de83750 down in Southbound
Jan 05 15:09:16 compute-0 ovn_controller[97763]: 2026-01-05T15:09:16Z|00094|binding|INFO|Removing iface tapfae4cff5-7c ovn-installed in OVS
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.442 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:16 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:16.455 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:d5:32 10.100.0.10'], port_security=['fa:16:3e:94:d5:32 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b609148c-bafc-4084-9491-68114aa80c67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8594a48f-0d80-4a92-87ee-40a6961e3975', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '678014b38c6f4f25a192ebc53f68039f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd15c5afa-04db-4551-8b0f-481ab4def61b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=739861a9-b1d8-47b5-af70-6bb1d7a202d4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=fae4cff5-7c84-4731-9afc-a8de3de83750) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:09:16 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:16.458 107222 INFO neutron.agent.ovn.metadata.agent [-] Port fae4cff5-7c84-4731-9afc-a8de3de83750 in datapath 8594a48f-0d80-4a92-87ee-40a6961e3975 unbound from our chassis
Jan 05 15:09:16 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:16.462 107222 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8594a48f-0d80-4a92-87ee-40a6961e3975, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.462 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:16 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:16.464 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[65850f2a-e3eb-4fb9-b276-43332bc7a2bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:16 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:16.465 107222 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975 namespace which is not needed anymore
Jan 05 15:09:16 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 05 15:09:16 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 7.665s CPU time.
Jan 05 15:09:16 compute-0 systemd-machined[156786]: Machine qemu-7-instance-00000007 terminated.
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.524 185478 DEBUG nova.compute.manager [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.525 185478 DEBUG oslo_concurrency.lockutils [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.525 185478 DEBUG oslo_concurrency.lockutils [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.526 185478 DEBUG oslo_concurrency.lockutils [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.526 185478 DEBUG nova.compute.manager [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Processing event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.526 185478 DEBUG nova.compute.manager [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.527 185478 DEBUG oslo_concurrency.lockutils [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.528 185478 DEBUG oslo_concurrency.lockutils [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.528 185478 DEBUG oslo_concurrency.lockutils [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.529 185478 DEBUG nova.compute.manager [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] No waiting events found dispatching network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.529 185478 WARNING nova.compute.manager [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received unexpected event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be for instance with vm_state building and task_state spawning.
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.530 185478 DEBUG nova.compute.manager [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Received event network-vif-plugged-b2305559-518c-443d-8e89-66e8c7533280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.530 185478 DEBUG oslo_concurrency.lockutils [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.531 185478 DEBUG oslo_concurrency.lockutils [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.531 185478 DEBUG oslo_concurrency.lockutils [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.532 185478 DEBUG nova.compute.manager [req-313f1bed-65b2-43e0-aa44-a951f57ba738 req-3dd82521-7b7a-41ea-9bc6-23510f1cb92d 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Processing event network-vif-plugged-b2305559-518c-443d-8e89-66e8c7533280 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.533 185478 DEBUG nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Instance event wait completed in 11 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.533 185478 DEBUG nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.542 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625756.5396025, e8f3f84a-a594-43d9-bab3-0c34ae22eb35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.543 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] VM Resumed (Lifecycle Event)
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.545 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.546 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.563 185478 INFO nova.virt.libvirt.driver [-] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Instance spawned successfully.
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.563 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.569 185478 INFO nova.virt.libvirt.driver [-] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Instance spawned successfully.
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.570 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.585 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:16 compute-0 kernel: tapfae4cff5-7c: entered promiscuous mode
Jan 05 15:09:16 compute-0 NetworkManager[56139]: <info>  [1767625756.5984] manager: (tapfae4cff5-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.600 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:16 compute-0 ovn_controller[97763]: 2026-01-05T15:09:16Z|00095|binding|INFO|Claiming lport fae4cff5-7c84-4731-9afc-a8de3de83750 for this chassis.
Jan 05 15:09:16 compute-0 ovn_controller[97763]: 2026-01-05T15:09:16Z|00096|binding|INFO|fae4cff5-7c84-4731-9afc-a8de3de83750: Claiming fa:16:3e:94:d5:32 10.100.0.10
Jan 05 15:09:16 compute-0 kernel: tapfae4cff5-7c (unregistering): left promiscuous mode
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.613 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.621 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.622 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.622 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.622 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.623 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.623 185478 DEBUG nova.virt.libvirt.driver [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:16.630 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:d5:32 10.100.0.10'], port_security=['fa:16:3e:94:d5:32 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b609148c-bafc-4084-9491-68114aa80c67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8594a48f-0d80-4a92-87ee-40a6961e3975', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '678014b38c6f4f25a192ebc53f68039f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd15c5afa-04db-4551-8b0f-481ab4def61b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=739861a9-b1d8-47b5-af70-6bb1d7a202d4, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=fae4cff5-7c84-4731-9afc-a8de3de83750) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.638 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.638 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.638 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.639 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.639 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.639 185478 DEBUG nova.virt.libvirt.driver [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.642 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:16 compute-0 ovn_controller[97763]: 2026-01-05T15:09:16Z|00097|binding|INFO|Releasing lport fae4cff5-7c84-4731-9afc-a8de3de83750 from this chassis (sb_readonly=0)
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.655 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.655 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625756.541436, 9f321f76-b34e-4ad0-b6c4-285f4470baa0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.655 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] VM Resumed (Lifecycle Event)
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.676 185478 INFO nova.virt.libvirt.driver [-] [instance: b609148c-bafc-4084-9491-68114aa80c67] Instance destroyed successfully.
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.677 185478 DEBUG nova.objects.instance [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lazy-loading 'resources' on Instance uuid b609148c-bafc-4084-9491-68114aa80c67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:09:16 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:16.699 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:d5:32 10.100.0.10'], port_security=['fa:16:3e:94:d5:32 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b609148c-bafc-4084-9491-68114aa80c67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8594a48f-0d80-4a92-87ee-40a6961e3975', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '678014b38c6f4f25a192ebc53f68039f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd15c5afa-04db-4551-8b0f-481ab4def61b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=739861a9-b1d8-47b5-af70-6bb1d7a202d4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=fae4cff5-7c84-4731-9afc-a8de3de83750) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.737 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.742 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.746 185478 DEBUG nova.virt.libvirt.vif [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T15:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1218735485',display_name='tempest-ServerAddressesTestJSON-server-1218735485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1218735485',id=7,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T15:09:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='678014b38c6f4f25a192ebc53f68039f',ramdisk_id='',reservation_id='r-m13rgl55',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1009038128',owner_user_name='tempest-ServerAddressesTestJSON-1009038128-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T15:09:09Z,user_data=None,user_id='dbda6f7f58004adf93ccce9df032cbbb',uuid=b609148c-bafc-4084-9491-68114aa80c67,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fae4cff5-7c84-4731-9afc-a8de3de83750", "address": "fa:16:3e:94:d5:32", "network": {"id": "8594a48f-0d80-4a92-87ee-40a6961e3975", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-277196153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "678014b38c6f4f25a192ebc53f68039f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae4cff5-7c", "ovs_interfaceid": "fae4cff5-7c84-4731-9afc-a8de3de83750", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.746 185478 DEBUG nova.network.os_vif_util [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Converting VIF {"id": "fae4cff5-7c84-4731-9afc-a8de3de83750", "address": "fa:16:3e:94:d5:32", "network": {"id": "8594a48f-0d80-4a92-87ee-40a6961e3975", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-277196153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "678014b38c6f4f25a192ebc53f68039f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfae4cff5-7c", "ovs_interfaceid": "fae4cff5-7c84-4731-9afc-a8de3de83750", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.747 185478 DEBUG nova.network.os_vif_util [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:d5:32,bridge_name='br-int',has_traffic_filtering=True,id=fae4cff5-7c84-4731-9afc-a8de3de83750,network=Network(8594a48f-0d80-4a92-87ee-40a6961e3975),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae4cff5-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.748 185478 DEBUG os_vif [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:d5:32,bridge_name='br-int',has_traffic_filtering=True,id=fae4cff5-7c84-4731-9afc-a8de3de83750,network=Network(8594a48f-0d80-4a92-87ee-40a6961e3975),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae4cff5-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.750 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.750 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfae4cff5-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.753 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.755 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.758 185478 INFO os_vif [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:d5:32,bridge_name='br-int',has_traffic_filtering=True,id=fae4cff5-7c84-4731-9afc-a8de3de83750,network=Network(8594a48f-0d80-4a92-87ee-40a6961e3975),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfae4cff5-7c')
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.759 185478 INFO nova.virt.libvirt.driver [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Deleting instance files /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67_del
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.760 185478 INFO nova.virt.libvirt.driver [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Deletion of /var/lib/nova/instances/b609148c-bafc-4084-9491-68114aa80c67_del complete
Jan 05 15:09:16 compute-0 neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975[250184]: [NOTICE]   (250200) : haproxy version is 2.8.14-c23fe91
Jan 05 15:09:16 compute-0 neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975[250184]: [NOTICE]   (250200) : path to executable is /usr/sbin/haproxy
Jan 05 15:09:16 compute-0 neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975[250184]: [WARNING]  (250200) : Exiting Master process...
Jan 05 15:09:16 compute-0 neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975[250184]: [ALERT]    (250200) : Current worker (250207) exited with code 143 (Terminated)
Jan 05 15:09:16 compute-0 neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975[250184]: [WARNING]  (250200) : All workers exited. Exiting... (0)
Jan 05 15:09:16 compute-0 systemd[1]: libpod-11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7.scope: Deactivated successfully.
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.779 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 15:09:16 compute-0 podman[250551]: 2026-01-05 15:09:16.785299285 +0000 UTC m=+0.099064153 container died 11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.798 185478 INFO nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Took 21.25 seconds to spawn the instance on the hypervisor.
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.799 185478 DEBUG nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.810 185478 INFO nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Took 13.07 seconds to spawn the instance on the hypervisor.
Jan 05 15:09:16 compute-0 nova_compute[185474]: 2026-01-05 15:09:16.810 185478 DEBUG nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7-userdata-shm.mount: Deactivated successfully.
Jan 05 15:09:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-af2ec6b816fa6cc816b6ca8d3c0e2f37354cc89869a18ef3eca4471de9dc279f-merged.mount: Deactivated successfully.
Jan 05 15:09:16 compute-0 podman[250551]: 2026-01-05 15:09:16.999071169 +0000 UTC m=+0.312836007 container cleanup 11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 05 15:09:17 compute-0 systemd[1]: libpod-conmon-11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7.scope: Deactivated successfully.
Jan 05 15:09:17 compute-0 nova_compute[185474]: 2026-01-05 15:09:17.101 185478 INFO nova.compute.manager [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 05 15:09:17 compute-0 nova_compute[185474]: 2026-01-05 15:09:17.102 185478 DEBUG oslo.service.loopingcall [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 15:09:17 compute-0 nova_compute[185474]: 2026-01-05 15:09:17.105 185478 DEBUG nova.compute.manager [-] [instance: b609148c-bafc-4084-9491-68114aa80c67] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 15:09:17 compute-0 nova_compute[185474]: 2026-01-05 15:09:17.106 185478 DEBUG nova.network.neutron [-] [instance: b609148c-bafc-4084-9491-68114aa80c67] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 15:09:17 compute-0 nova_compute[185474]: 2026-01-05 15:09:17.113 185478 INFO nova.compute.manager [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Took 22.17 seconds to build instance.
Jan 05 15:09:17 compute-0 nova_compute[185474]: 2026-01-05 15:09:17.116 185478 INFO nova.compute.manager [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Took 14.06 seconds to build instance.
Jan 05 15:09:17 compute-0 nova_compute[185474]: 2026-01-05 15:09:17.155 185478 DEBUG oslo_concurrency.lockutils [None req-8955306d-d596-4211-a182-5cc6b0ea76cf b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:17 compute-0 podman[250581]: 2026-01-05 15:09:17.193490065 +0000 UTC m=+0.143430451 container remove 11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.203 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[70e390cc-255f-47ad-a04d-7459b5855294]: (4, ('Mon Jan  5 03:09:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975 (11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7)\n11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7\nMon Jan  5 03:09:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975 (11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7)\n11e60b49386511283c3eee55241757d2564d02e7f28f38686f8eeeb48ab472c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.205 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[bc52a6c3-cb65-46ba-9406-0b4304236e03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.207 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8594a48f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:17 compute-0 kernel: tap8594a48f-00: left promiscuous mode
Jan 05 15:09:17 compute-0 nova_compute[185474]: 2026-01-05 15:09:17.212 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:17 compute-0 nova_compute[185474]: 2026-01-05 15:09:17.215 185478 DEBUG oslo_concurrency.lockutils [None req-05e6458f-d311-4512-8534-8f77b35686e7 b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:17 compute-0 nova_compute[185474]: 2026-01-05 15:09:17.225 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.227 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[395db5ef-d979-4537-8074-cb345a303d1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.250 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[826f8778-a4f2-4e5e-8dad-1b22d62f10ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.252 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[d1456ea8-eb24-4dd0-8939-55a41eac2f69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.276 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[dddf7cc0-7a07-47d4-83e7-572a25c23032]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507495, 'reachable_time': 44330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250595, 'error': None, 'target': 'ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d8594a48f\x2d0d80\x2d4a92\x2d87ee\x2d40a6961e3975.mount: Deactivated successfully.
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.279 107613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8594a48f-0d80-4a92-87ee-40a6961e3975 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.279 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[375bf97c-c0db-40f4-adaa-9baa5117413a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.280 107222 INFO neutron.agent.ovn.metadata.agent [-] Port fae4cff5-7c84-4731-9afc-a8de3de83750 in datapath 8594a48f-0d80-4a92-87ee-40a6961e3975 unbound from our chassis
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.283 107222 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8594a48f-0d80-4a92-87ee-40a6961e3975, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.286 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f45895fd-f214-4248-855e-815699d7ed04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.289 107222 INFO neutron.agent.ovn.metadata.agent [-] Port fae4cff5-7c84-4731-9afc-a8de3de83750 in datapath 8594a48f-0d80-4a92-87ee-40a6961e3975 unbound from our chassis
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.293 107222 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8594a48f-0d80-4a92-87ee-40a6961e3975, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 15:09:17 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:17.294 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[c30dbcce-b7f7-426c-bf82-83f5e68b11c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.468 185478 DEBUG nova.compute.manager [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Received event network-vif-plugged-a5cac4ea-b043-4a43-9bef-a37897937741 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.468 185478 DEBUG oslo_concurrency.lockutils [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "00943943-b19d-4862-8829-45a5cc14e988-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.468 185478 DEBUG oslo_concurrency.lockutils [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "00943943-b19d-4862-8829-45a5cc14e988-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.468 185478 DEBUG oslo_concurrency.lockutils [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "00943943-b19d-4862-8829-45a5cc14e988-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.469 185478 DEBUG nova.compute.manager [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] No waiting events found dispatching network-vif-plugged-a5cac4ea-b043-4a43-9bef-a37897937741 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.469 185478 WARNING nova.compute.manager [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Received unexpected event network-vif-plugged-a5cac4ea-b043-4a43-9bef-a37897937741 for instance with vm_state active and task_state None.
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.469 185478 DEBUG nova.compute.manager [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Received event network-vif-unplugged-fae4cff5-7c84-4731-9afc-a8de3de83750 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.469 185478 DEBUG oslo_concurrency.lockutils [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "b609148c-bafc-4084-9491-68114aa80c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.469 185478 DEBUG oslo_concurrency.lockutils [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.470 185478 DEBUG oslo_concurrency.lockutils [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.470 185478 DEBUG nova.compute.manager [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] No waiting events found dispatching network-vif-unplugged-fae4cff5-7c84-4731-9afc-a8de3de83750 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.470 185478 DEBUG nova.compute.manager [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Received event network-vif-unplugged-fae4cff5-7c84-4731-9afc-a8de3de83750 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.470 185478 DEBUG nova.compute.manager [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Received event network-vif-plugged-fae4cff5-7c84-4731-9afc-a8de3de83750 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.471 185478 DEBUG oslo_concurrency.lockutils [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "b609148c-bafc-4084-9491-68114aa80c67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.471 185478 DEBUG oslo_concurrency.lockutils [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.471 185478 DEBUG oslo_concurrency.lockutils [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.472 185478 DEBUG nova.compute.manager [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] No waiting events found dispatching network-vif-plugged-fae4cff5-7c84-4731-9afc-a8de3de83750 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.472 185478 WARNING nova.compute.manager [req-18c2c2bd-3ca6-49ec-8620-65c91fe0f18b req-bbed544b-3211-4eb1-98d9-8d0c9ecea6ba 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Received unexpected event network-vif-plugged-fae4cff5-7c84-4731-9afc-a8de3de83750 for instance with vm_state active and task_state deleting.
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.764 185478 DEBUG nova.network.neutron [-] [instance: b609148c-bafc-4084-9491-68114aa80c67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.796 185478 INFO nova.compute.manager [-] [instance: b609148c-bafc-4084-9491-68114aa80c67] Took 1.69 seconds to deallocate network for instance.
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.865 185478 DEBUG oslo_concurrency.lockutils [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:18 compute-0 nova_compute[185474]: 2026-01-05 15:09:18.866 185478 DEBUG oslo_concurrency.lockutils [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.016 185478 DEBUG nova.compute.provider_tree [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.042 185478 DEBUG nova.scheduler.client.report [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.073 185478 DEBUG oslo_concurrency.lockutils [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.104 185478 INFO nova.scheduler.client.report [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Deleted allocations for instance b609148c-bafc-4084-9491-68114aa80c67
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.172 185478 DEBUG oslo_concurrency.lockutils [None req-7aff3f0a-ac02-4fb3-8a1a-669cedc27cae dbda6f7f58004adf93ccce9df032cbbb 678014b38c6f4f25a192ebc53f68039f - - default default] Lock "b609148c-bafc-4084-9491-68114aa80c67" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.330 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.381 185478 DEBUG nova.compute.manager [req-9eb64b9f-271d-433f-93e9-32581160257a req-a01fa866-7b6e-4df3-bb2e-18475ab590fb 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Received event network-vif-plugged-b2305559-518c-443d-8e89-66e8c7533280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.382 185478 DEBUG oslo_concurrency.lockutils [req-9eb64b9f-271d-433f-93e9-32581160257a req-a01fa866-7b6e-4df3-bb2e-18475ab590fb 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.383 185478 DEBUG oslo_concurrency.lockutils [req-9eb64b9f-271d-433f-93e9-32581160257a req-a01fa866-7b6e-4df3-bb2e-18475ab590fb 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.383 185478 DEBUG oslo_concurrency.lockutils [req-9eb64b9f-271d-433f-93e9-32581160257a req-a01fa866-7b6e-4df3-bb2e-18475ab590fb 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.384 185478 DEBUG nova.compute.manager [req-9eb64b9f-271d-433f-93e9-32581160257a req-a01fa866-7b6e-4df3-bb2e-18475ab590fb 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] No waiting events found dispatching network-vif-plugged-b2305559-518c-443d-8e89-66e8c7533280 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:09:19 compute-0 nova_compute[185474]: 2026-01-05 15:09:19.384 185478 WARNING nova.compute.manager [req-9eb64b9f-271d-433f-93e9-32581160257a req-a01fa866-7b6e-4df3-bb2e-18475ab590fb 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Received unexpected event network-vif-plugged-b2305559-518c-443d-8e89-66e8c7533280 for instance with vm_state active and task_state None.
Jan 05 15:09:20 compute-0 nova_compute[185474]: 2026-01-05 15:09:20.373 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:20 compute-0 NetworkManager[56139]: <info>  [1767625760.3756] manager: (patch-br-int-to-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 05 15:09:20 compute-0 NetworkManager[56139]: <info>  [1767625760.3767] manager: (patch-provnet-85a56a04-e0e2-48a6-a4ac-3ab4da512c67-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Jan 05 15:09:20 compute-0 nova_compute[185474]: 2026-01-05 15:09:20.491 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:20 compute-0 ovn_controller[97763]: 2026-01-05T15:09:20Z|00098|binding|INFO|Releasing lport 707d34b3-bc8b-4c2e-8e88-017cd6da92d0 from this chassis (sb_readonly=0)
Jan 05 15:09:20 compute-0 ovn_controller[97763]: 2026-01-05T15:09:20Z|00099|binding|INFO|Releasing lport 02807d47-c59f-4c92-8290-7fec7d1bc7e4 from this chassis (sb_readonly=0)
Jan 05 15:09:20 compute-0 ovn_controller[97763]: 2026-01-05T15:09:20Z|00100|binding|INFO|Releasing lport 6927012b-4832-4a5d-ad3c-7ccc0585064b from this chassis (sb_readonly=0)
Jan 05 15:09:20 compute-0 nova_compute[185474]: 2026-01-05 15:09:20.531 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:20 compute-0 podman[250601]: 2026-01-05 15:09:20.608145525 +0000 UTC m=+0.095000515 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 15:09:20 compute-0 podman[250599]: 2026-01-05 15:09:20.614180597 +0000 UTC m=+0.097836791 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 15:09:21 compute-0 nova_compute[185474]: 2026-01-05 15:09:21.508 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:21 compute-0 nova_compute[185474]: 2026-01-05 15:09:21.525 185478 DEBUG nova.compute.manager [req-5cace158-e1ad-4096-8b95-4e90ac692247 req-c22c710b-5e1c-4d63-8566-d402dab51c5a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: b609148c-bafc-4084-9491-68114aa80c67] Received event network-vif-deleted-fae4cff5-7c84-4731-9afc-a8de3de83750 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:21 compute-0 nova_compute[185474]: 2026-01-05 15:09:21.525 185478 DEBUG nova.compute.manager [req-5cace158-e1ad-4096-8b95-4e90ac692247 req-c22c710b-5e1c-4d63-8566-d402dab51c5a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Received event network-changed-a5cac4ea-b043-4a43-9bef-a37897937741 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:21 compute-0 nova_compute[185474]: 2026-01-05 15:09:21.525 185478 DEBUG nova.compute.manager [req-5cace158-e1ad-4096-8b95-4e90ac692247 req-c22c710b-5e1c-4d63-8566-d402dab51c5a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Refreshing instance network info cache due to event network-changed-a5cac4ea-b043-4a43-9bef-a37897937741. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:09:21 compute-0 nova_compute[185474]: 2026-01-05 15:09:21.526 185478 DEBUG oslo_concurrency.lockutils [req-5cace158-e1ad-4096-8b95-4e90ac692247 req-c22c710b-5e1c-4d63-8566-d402dab51c5a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:21 compute-0 nova_compute[185474]: 2026-01-05 15:09:21.526 185478 DEBUG oslo_concurrency.lockutils [req-5cace158-e1ad-4096-8b95-4e90ac692247 req-c22c710b-5e1c-4d63-8566-d402dab51c5a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:21 compute-0 nova_compute[185474]: 2026-01-05 15:09:21.526 185478 DEBUG nova.network.neutron [req-5cace158-e1ad-4096-8b95-4e90ac692247 req-c22c710b-5e1c-4d63-8566-d402dab51c5a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Refreshing network info cache for port a5cac4ea-b043-4a43-9bef-a37897937741 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:09:21 compute-0 nova_compute[185474]: 2026-01-05 15:09:21.754 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.561 185478 DEBUG oslo_concurrency.lockutils [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquiring lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.562 185478 DEBUG oslo_concurrency.lockutils [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.562 185478 DEBUG oslo_concurrency.lockutils [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquiring lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.562 185478 DEBUG oslo_concurrency.lockutils [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.563 185478 DEBUG oslo_concurrency.lockutils [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.564 185478 INFO nova.compute.manager [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Terminating instance
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.565 185478 DEBUG nova.compute.manager [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 15:09:23 compute-0 kernel: tapb2305559-51 (unregistering): left promiscuous mode
Jan 05 15:09:23 compute-0 NetworkManager[56139]: <info>  [1767625763.6025] device (tapb2305559-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 15:09:23 compute-0 ovn_controller[97763]: 2026-01-05T15:09:23Z|00101|binding|INFO|Releasing lport b2305559-518c-443d-8e89-66e8c7533280 from this chassis (sb_readonly=0)
Jan 05 15:09:23 compute-0 ovn_controller[97763]: 2026-01-05T15:09:23Z|00102|binding|INFO|Setting lport b2305559-518c-443d-8e89-66e8c7533280 down in Southbound
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.614 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:23 compute-0 ovn_controller[97763]: 2026-01-05T15:09:23Z|00103|binding|INFO|Removing iface tapb2305559-51 ovn-installed in OVS
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.617 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:23 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:23.627 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:b3:81 10.100.0.5'], port_security=['fa:16:3e:6a:b3:81 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e8f3f84a-a594-43d9-bab3-0c34ae22eb35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-789d59ac-11f1-48c0-a5bc-712b3342f5f3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c91575382ac0488994f8b0a9212854c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3848b7a3-0cba-49e5-aadb-aa2d56faf9fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb4ef76f-23a1-4112-ad2e-da98703f38a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=b2305559-518c-443d-8e89-66e8c7533280) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:09:23 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:23.628 107222 INFO neutron.agent.ovn.metadata.agent [-] Port b2305559-518c-443d-8e89-66e8c7533280 in datapath 789d59ac-11f1-48c0-a5bc-712b3342f5f3 unbound from our chassis
Jan 05 15:09:23 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:23.630 107222 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 789d59ac-11f1-48c0-a5bc-712b3342f5f3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 15:09:23 compute-0 podman[250642]: 2026-01-05 15:09:23.631973113 +0000 UTC m=+0.121112664 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 15:09:23 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:23.631 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a07e76-fd50-4aa6-8359-c0010d8870af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:23 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:23.632 107222 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3 namespace which is not needed anymore
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.638 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:23 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 05 15:09:23 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 7.780s CPU time.
Jan 05 15:09:23 compute-0 systemd-machined[156786]: Machine qemu-8-instance-00000008 terminated.
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.683 185478 DEBUG nova.compute.manager [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Received event network-changed-b2305559-518c-443d-8e89-66e8c7533280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.686 185478 DEBUG nova.compute.manager [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Refreshing instance network info cache due to event network-changed-b2305559-518c-443d-8e89-66e8c7533280. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.687 185478 DEBUG oslo_concurrency.lockutils [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-e8f3f84a-a594-43d9-bab3-0c34ae22eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.688 185478 DEBUG oslo_concurrency.lockutils [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-e8f3f84a-a594-43d9-bab3-0c34ae22eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.688 185478 DEBUG nova.network.neutron [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Refreshing network info cache for port b2305559-518c-443d-8e89-66e8c7533280 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:09:23 compute-0 neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3[250375]: [NOTICE]   (250407) : haproxy version is 2.8.14-c23fe91
Jan 05 15:09:23 compute-0 neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3[250375]: [NOTICE]   (250407) : path to executable is /usr/sbin/haproxy
Jan 05 15:09:23 compute-0 neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3[250375]: [WARNING]  (250407) : Exiting Master process...
Jan 05 15:09:23 compute-0 neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3[250375]: [WARNING]  (250407) : Exiting Master process...
Jan 05 15:09:23 compute-0 neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3[250375]: [ALERT]    (250407) : Current worker (250409) exited with code 143 (Terminated)
Jan 05 15:09:23 compute-0 neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3[250375]: [WARNING]  (250407) : All workers exited. Exiting... (0)
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.853 185478 INFO nova.virt.libvirt.driver [-] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Instance destroyed successfully.
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.853 185478 DEBUG nova.objects.instance [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lazy-loading 'resources' on Instance uuid e8f3f84a-a594-43d9-bab3-0c34ae22eb35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:09:23 compute-0 systemd[1]: libpod-f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539.scope: Deactivated successfully.
Jan 05 15:09:23 compute-0 podman[250682]: 2026-01-05 15:09:23.862111945 +0000 UTC m=+0.079702745 container died f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.867 185478 DEBUG nova.virt.libvirt.vif [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T15:09:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-93055923',display_name='tempest-ServersTestJSON-server-93055923',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-93055923',id=8,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOKjKHxMzV9IKMXtRWphl2b40AbYPZvQPMxhHq7kTAe84zAbR8ZtG9PfDS/YYxPSKki8zjxJTK+0AAWxpbY+SQ9Ib05RnnMnYmgv8LIGU89QZlVYEuk8pJyOC9BJ2NWKyA==',key_name='tempest-keypair-664545898',keypairs=<?>,launch_index=0,launched_at=2026-01-05T15:09:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c91575382ac0488994f8b0a9212854c9',ramdisk_id='',reservation_id='r-li46x666',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-654130884',owner_user_name='tempest-ServersTestJSON-654130884-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T15:09:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b3646be802e34810b0e66c68a88a3e3b',uuid=e8f3f84a-a594-43d9-bab3-0c34ae22eb35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": 
"ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.868 185478 DEBUG nova.network.os_vif_util [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Converting VIF {"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.869 185478 DEBUG nova.network.os_vif_util [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:b3:81,bridge_name='br-int',has_traffic_filtering=True,id=b2305559-518c-443d-8e89-66e8c7533280,network=Network(789d59ac-11f1-48c0-a5bc-712b3342f5f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2305559-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.869 185478 DEBUG os_vif [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:b3:81,bridge_name='br-int',has_traffic_filtering=True,id=b2305559-518c-443d-8e89-66e8c7533280,network=Network(789d59ac-11f1-48c0-a5bc-712b3342f5f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2305559-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.871 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.871 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2305559-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.873 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.875 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.880 185478 INFO os_vif [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:b3:81,bridge_name='br-int',has_traffic_filtering=True,id=b2305559-518c-443d-8e89-66e8c7533280,network=Network(789d59ac-11f1-48c0-a5bc-712b3342f5f3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2305559-51')
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.882 185478 INFO nova.virt.libvirt.driver [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Deleting instance files /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35_del
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.883 185478 INFO nova.virt.libvirt.driver [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Deletion of /var/lib/nova/instances/e8f3f84a-a594-43d9-bab3-0c34ae22eb35_del complete
Jan 05 15:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539-userdata-shm.mount: Deactivated successfully.
Jan 05 15:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8060be51428a8bfbeb01d6b8fe2bcb201011354b55d207e690abcdc9c2ddaf4-merged.mount: Deactivated successfully.
Jan 05 15:09:23 compute-0 podman[250682]: 2026-01-05 15:09:23.918413682 +0000 UTC m=+0.136004482 container cleanup f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:09:23 compute-0 systemd[1]: libpod-conmon-f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539.scope: Deactivated successfully.
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.956 185478 INFO nova.compute.manager [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Took 0.39 seconds to destroy the instance on the hypervisor.
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.957 185478 DEBUG oslo.service.loopingcall [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.958 185478 DEBUG nova.compute.manager [-] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 15:09:23 compute-0 nova_compute[185474]: 2026-01-05 15:09:23.958 185478 DEBUG nova.network.neutron [-] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 15:09:24 compute-0 podman[250726]: 2026-01-05 15:09:24.024591925 +0000 UTC m=+0.075157193 container remove f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 05 15:09:24 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:24.032 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[22baa355-3d3c-4d07-b2ed-db828403f695]: (4, ('Mon Jan  5 03:09:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3 (f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539)\nf1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539\nMon Jan  5 03:09:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3 (f1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539)\nf1e7b694b549eafe2f4910b993439f8556608e6b0b20ea9ac0f64d221fc13539\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:24 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:24.035 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[c8942cdc-3fb3-4210-838a-7bc895ebca7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:24 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:24.036 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap789d59ac-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:24 compute-0 nova_compute[185474]: 2026-01-05 15:09:24.039 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:24 compute-0 kernel: tap789d59ac-10: left promiscuous mode
Jan 05 15:09:24 compute-0 nova_compute[185474]: 2026-01-05 15:09:24.055 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:24 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:24.057 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe0c406-9cbd-4b8a-84df-1b0b038e598e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:24 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:24.072 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[e0819d86-8957-471e-bdc9-5330d5b15d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:24 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:24.075 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[497797e7-cb02-41a7-adb1-98d80659b736]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:24 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:24.097 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[59fc4786-3015-4f3d-96de-5a3f9b3a1d41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507814, 'reachable_time': 34534, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250740, 'error': None, 'target': 'ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:24 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:24.100 107613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-789d59ac-11f1-48c0-a5bc-712b3342f5f3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 15:09:24 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:24.100 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[c403491c-cd17-4ed0-a592-ee359b1b3ce7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d789d59ac\x2d11f1\x2d48c0\x2da5bc\x2d712b3342f5f3.mount: Deactivated successfully.
Jan 05 15:09:24 compute-0 nova_compute[185474]: 2026-01-05 15:09:24.331 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:24 compute-0 nova_compute[185474]: 2026-01-05 15:09:24.777 185478 DEBUG nova.network.neutron [req-5cace158-e1ad-4096-8b95-4e90ac692247 req-c22c710b-5e1c-4d63-8566-d402dab51c5a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updated VIF entry in instance network info cache for port a5cac4ea-b043-4a43-9bef-a37897937741. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:09:24 compute-0 nova_compute[185474]: 2026-01-05 15:09:24.778 185478 DEBUG nova.network.neutron [req-5cace158-e1ad-4096-8b95-4e90ac692247 req-c22c710b-5e1c-4d63-8566-d402dab51c5a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updating instance_info_cache with network_info: [{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:24 compute-0 nova_compute[185474]: 2026-01-05 15:09:24.806 185478 DEBUG oslo_concurrency.lockutils [req-5cace158-e1ad-4096-8b95-4e90ac692247 req-c22c710b-5e1c-4d63-8566-d402dab51c5a 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.044 185478 DEBUG nova.compute.manager [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Received event network-vif-unplugged-b2305559-518c-443d-8e89-66e8c7533280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.045 185478 DEBUG oslo_concurrency.lockutils [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.045 185478 DEBUG oslo_concurrency.lockutils [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.046 185478 DEBUG oslo_concurrency.lockutils [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.046 185478 DEBUG nova.compute.manager [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] No waiting events found dispatching network-vif-unplugged-b2305559-518c-443d-8e89-66e8c7533280 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.047 185478 DEBUG nova.compute.manager [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Received event network-vif-unplugged-b2305559-518c-443d-8e89-66e8c7533280 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.047 185478 DEBUG nova.compute.manager [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Received event network-vif-plugged-b2305559-518c-443d-8e89-66e8c7533280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.048 185478 DEBUG oslo_concurrency.lockutils [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.048 185478 DEBUG oslo_concurrency.lockutils [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.049 185478 DEBUG oslo_concurrency.lockutils [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.049 185478 DEBUG nova.compute.manager [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] No waiting events found dispatching network-vif-plugged-b2305559-518c-443d-8e89-66e8c7533280 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:09:26 compute-0 nova_compute[185474]: 2026-01-05 15:09:26.050 185478 WARNING nova.compute.manager [req-1d13907b-94d2-47ea-9d27-f7b5c4a2b03d req-2ea6a866-e1b6-42f5-811a-277792a385e3 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Received unexpected event network-vif-plugged-b2305559-518c-443d-8e89-66e8c7533280 for instance with vm_state active and task_state deleting.
Jan 05 15:09:27 compute-0 nova_compute[185474]: 2026-01-05 15:09:27.179 185478 DEBUG nova.network.neutron [-] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:27 compute-0 nova_compute[185474]: 2026-01-05 15:09:27.228 185478 INFO nova.compute.manager [-] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Took 3.27 seconds to deallocate network for instance.
Jan 05 15:09:27 compute-0 nova_compute[185474]: 2026-01-05 15:09:27.280 185478 DEBUG oslo_concurrency.lockutils [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:27 compute-0 nova_compute[185474]: 2026-01-05 15:09:27.281 185478 DEBUG oslo_concurrency.lockutils [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:27 compute-0 nova_compute[185474]: 2026-01-05 15:09:27.380 185478 DEBUG nova.compute.provider_tree [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:09:27 compute-0 nova_compute[185474]: 2026-01-05 15:09:27.394 185478 DEBUG nova.scheduler.client.report [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:09:27 compute-0 nova_compute[185474]: 2026-01-05 15:09:27.418 185478 DEBUG oslo_concurrency.lockutils [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:27 compute-0 nova_compute[185474]: 2026-01-05 15:09:27.444 185478 INFO nova.scheduler.client.report [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Deleted allocations for instance e8f3f84a-a594-43d9-bab3-0c34ae22eb35
Jan 05 15:09:27 compute-0 nova_compute[185474]: 2026-01-05 15:09:27.541 185478 DEBUG oslo_concurrency.lockutils [None req-9ad7af0e-0e3f-4b6d-bef0-d6120d2dca3f b3646be802e34810b0e66c68a88a3e3b c91575382ac0488994f8b0a9212854c9 - - default default] Lock "e8f3f84a-a594-43d9-bab3-0c34ae22eb35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:27 compute-0 nova_compute[185474]: 2026-01-05 15:09:27.726 185478 DEBUG nova.compute.manager [req-efe9944f-0ad6-48cb-ba42-bc3ca686d1e7 req-e1de5373-8387-462c-b259-bbfa6d8d53a8 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Received event network-vif-deleted-b2305559-518c-443d-8e89-66e8c7533280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:28 compute-0 nova_compute[185474]: 2026-01-05 15:09:28.247 185478 DEBUG nova.network.neutron [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Updated VIF entry in instance network info cache for port b2305559-518c-443d-8e89-66e8c7533280. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:09:28 compute-0 nova_compute[185474]: 2026-01-05 15:09:28.248 185478 DEBUG nova.network.neutron [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Updating instance_info_cache with network_info: [{"id": "b2305559-518c-443d-8e89-66e8c7533280", "address": "fa:16:3e:6a:b3:81", "network": {"id": "789d59ac-11f1-48c0-a5bc-712b3342f5f3", "bridge": "br-int", "label": "tempest-ServersTestJSON-329818072-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c91575382ac0488994f8b0a9212854c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2305559-51", "ovs_interfaceid": "b2305559-518c-443d-8e89-66e8c7533280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:28 compute-0 nova_compute[185474]: 2026-01-05 15:09:28.278 185478 DEBUG oslo_concurrency.lockutils [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-e8f3f84a-a594-43d9-bab3-0c34ae22eb35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:28 compute-0 nova_compute[185474]: 2026-01-05 15:09:28.279 185478 DEBUG nova.compute.manager [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-changed-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:28 compute-0 nova_compute[185474]: 2026-01-05 15:09:28.282 185478 DEBUG nova.compute.manager [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Refreshing instance network info cache due to event network-changed-5d68d02c-7204-4217-adec-1d5b6f2fc0be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:09:28 compute-0 nova_compute[185474]: 2026-01-05 15:09:28.283 185478 DEBUG oslo_concurrency.lockutils [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:28 compute-0 nova_compute[185474]: 2026-01-05 15:09:28.284 185478 DEBUG oslo_concurrency.lockutils [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:28 compute-0 nova_compute[185474]: 2026-01-05 15:09:28.285 185478 DEBUG nova.network.neutron [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Refreshing network info cache for port 5d68d02c-7204-4217-adec-1d5b6f2fc0be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:09:28 compute-0 ovn_controller[97763]: 2026-01-05T15:09:28Z|00104|binding|INFO|Releasing lport 707d34b3-bc8b-4c2e-8e88-017cd6da92d0 from this chassis (sb_readonly=0)
Jan 05 15:09:28 compute-0 ovn_controller[97763]: 2026-01-05T15:09:28Z|00105|binding|INFO|Releasing lport 02807d47-c59f-4c92-8290-7fec7d1bc7e4 from this chassis (sb_readonly=0)
Jan 05 15:09:28 compute-0 nova_compute[185474]: 2026-01-05 15:09:28.504 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:28 compute-0 nova_compute[185474]: 2026-01-05 15:09:28.874 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:28 compute-0 ovn_controller[97763]: 2026-01-05T15:09:28Z|00106|binding|INFO|Releasing lport 707d34b3-bc8b-4c2e-8e88-017cd6da92d0 from this chassis (sb_readonly=0)
Jan 05 15:09:28 compute-0 ovn_controller[97763]: 2026-01-05T15:09:28Z|00107|binding|INFO|Releasing lport 02807d47-c59f-4c92-8290-7fec7d1bc7e4 from this chassis (sb_readonly=0)
Jan 05 15:09:29 compute-0 nova_compute[185474]: 2026-01-05 15:09:29.000 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:29 compute-0 nova_compute[185474]: 2026-01-05 15:09:29.334 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:29 compute-0 podman[201880]: time="2026-01-05T15:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:09:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 05 15:09:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4843 "" "Go-http-client/1.1"
Jan 05 15:09:30 compute-0 nova_compute[185474]: 2026-01-05 15:09:30.462 185478 DEBUG nova.network.neutron [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Updated VIF entry in instance network info cache for port 5d68d02c-7204-4217-adec-1d5b6f2fc0be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:09:30 compute-0 nova_compute[185474]: 2026-01-05 15:09:30.464 185478 DEBUG nova.network.neutron [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Updating instance_info_cache with network_info: [{"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:30 compute-0 nova_compute[185474]: 2026-01-05 15:09:30.487 185478 DEBUG oslo_concurrency.lockutils [req-a0a5b4d6-c14e-40b3-845d-ab19ce074ffc req-a4ac816f-613a-406c-a8c6-4c4ca4936f50 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:31 compute-0 openstack_network_exporter[205179]: ERROR   15:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:09:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:09:31 compute-0 openstack_network_exporter[205179]: ERROR   15:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:09:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:09:31 compute-0 nova_compute[185474]: 2026-01-05 15:09:31.780 185478 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767625756.671457, b609148c-bafc-4084-9491-68114aa80c67 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:31 compute-0 nova_compute[185474]: 2026-01-05 15:09:31.781 185478 INFO nova.compute.manager [-] [instance: b609148c-bafc-4084-9491-68114aa80c67] VM Stopped (Lifecycle Event)
Jan 05 15:09:31 compute-0 nova_compute[185474]: 2026-01-05 15:09:31.808 185478 DEBUG nova.compute.manager [None req-c20a2bdf-543a-452d-9925-46a5f388d05c - - - - - -] [instance: b609148c-bafc-4084-9491-68114aa80c67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:33 compute-0 podman[250741]: 2026-01-05 15:09:33.638652778 +0000 UTC m=+0.123972591 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 05 15:09:33 compute-0 nova_compute[185474]: 2026-01-05 15:09:33.878 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:34 compute-0 nova_compute[185474]: 2026-01-05 15:09:34.337 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:35 compute-0 ovn_controller[97763]: 2026-01-05T15:09:35Z|00108|binding|INFO|Releasing lport 707d34b3-bc8b-4c2e-8e88-017cd6da92d0 from this chassis (sb_readonly=0)
Jan 05 15:09:35 compute-0 ovn_controller[97763]: 2026-01-05T15:09:35Z|00109|binding|INFO|Releasing lport 02807d47-c59f-4c92-8290-7fec7d1bc7e4 from this chassis (sb_readonly=0)
Jan 05 15:09:35 compute-0 nova_compute[185474]: 2026-01-05 15:09:35.157 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:35 compute-0 nova_compute[185474]: 2026-01-05 15:09:35.826 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Acquiring lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:35 compute-0 nova_compute[185474]: 2026-01-05 15:09:35.827 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:35 compute-0 nova_compute[185474]: 2026-01-05 15:09:35.843 185478 DEBUG nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 15:09:35 compute-0 nova_compute[185474]: 2026-01-05 15:09:35.920 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:35 compute-0 nova_compute[185474]: 2026-01-05 15:09:35.921 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:35 compute-0 nova_compute[185474]: 2026-01-05 15:09:35.931 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 15:09:35 compute-0 nova_compute[185474]: 2026-01-05 15:09:35.932 185478 INFO nova.compute.claims [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Claim successful on node compute-0.ctlplane.example.com
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.112 185478 DEBUG nova.compute.provider_tree [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.130 185478 DEBUG nova.scheduler.client.report [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.155 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.156 185478 DEBUG nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.206 185478 DEBUG nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.207 185478 DEBUG nova.network.neutron [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.222 185478 INFO nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.242 185478 DEBUG nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.333 185478 DEBUG nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.335 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.336 185478 INFO nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Creating image(s)
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.336 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Acquiring lock "/var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.337 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "/var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.338 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "/var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.352 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.514 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.515 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Acquiring lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.516 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.526 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.622 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.623 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7,backing_fmt=raw /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.665 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7,backing_fmt=raw /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.666 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "38b8ac6fc49be41905fc77dbe18ef48c096d20d7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.667 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.741 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.742 185478 DEBUG nova.virt.disk.api [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Checking if we can resize image /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.743 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.798 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.799 185478 DEBUG nova.virt.disk.api [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Cannot resize image /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.800 185478 DEBUG nova.objects.instance [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lazy-loading 'migration_context' on Instance uuid e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.819 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.819 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Ensure instance console log exists: /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.820 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.820 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.820 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:36 compute-0 nova_compute[185474]: 2026-01-05 15:09:36.866 185478 DEBUG nova.policy [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d883f36e32b4c71b56683d7117547d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '134d57b916be4f4ca80b3a59630701e5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 15:09:38 compute-0 nova_compute[185474]: 2026-01-05 15:09:38.847 185478 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767625763.8452775, e8f3f84a-a594-43d9-bab3-0c34ae22eb35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:38 compute-0 nova_compute[185474]: 2026-01-05 15:09:38.848 185478 INFO nova.compute.manager [-] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] VM Stopped (Lifecycle Event)
Jan 05 15:09:38 compute-0 nova_compute[185474]: 2026-01-05 15:09:38.874 185478 DEBUG nova.compute.manager [None req-839d5db2-53e5-4b4a-80a2-4a8863e304b2 - - - - - -] [instance: e8f3f84a-a594-43d9-bab3-0c34ae22eb35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:38 compute-0 nova_compute[185474]: 2026-01-05 15:09:38.883 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:39 compute-0 nova_compute[185474]: 2026-01-05 15:09:39.283 185478 DEBUG nova.network.neutron [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Successfully created port: 39d7dd25-004e-46d1-b35c-19e1d39b90b7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 15:09:39 compute-0 nova_compute[185474]: 2026-01-05 15:09:39.339 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:39 compute-0 podman[250773]: 2026-01-05 15:09:39.612470657 +0000 UTC m=+0.101938381 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal)
Jan 05 15:09:40 compute-0 nova_compute[185474]: 2026-01-05 15:09:40.840 185478 DEBUG nova.network.neutron [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Successfully updated port: 39d7dd25-004e-46d1-b35c-19e1d39b90b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 15:09:40 compute-0 nova_compute[185474]: 2026-01-05 15:09:40.857 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Acquiring lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:40 compute-0 nova_compute[185474]: 2026-01-05 15:09:40.858 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Acquired lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:40 compute-0 nova_compute[185474]: 2026-01-05 15:09:40.858 185478 DEBUG nova.network.neutron [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 15:09:41 compute-0 nova_compute[185474]: 2026-01-05 15:09:41.070 185478 DEBUG nova.compute.manager [req-0d36542d-8c48-4d4e-a1f1-546610237627 req-dea917ea-7458-4b39-9610-2e8630182155 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Received event network-changed-39d7dd25-004e-46d1-b35c-19e1d39b90b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:41 compute-0 nova_compute[185474]: 2026-01-05 15:09:41.071 185478 DEBUG nova.compute.manager [req-0d36542d-8c48-4d4e-a1f1-546610237627 req-dea917ea-7458-4b39-9610-2e8630182155 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Refreshing instance network info cache due to event network-changed-39d7dd25-004e-46d1-b35c-19e1d39b90b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:09:41 compute-0 nova_compute[185474]: 2026-01-05 15:09:41.072 185478 DEBUG oslo_concurrency.lockutils [req-0d36542d-8c48-4d4e-a1f1-546610237627 req-dea917ea-7458-4b39-9610-2e8630182155 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:41 compute-0 nova_compute[185474]: 2026-01-05 15:09:41.294 185478 DEBUG nova.network.neutron [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 15:09:42 compute-0 podman[250793]: 2026-01-05 15:09:42.633942051 +0000 UTC m=+0.120082986 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 15:09:42 compute-0 podman[250794]: 2026-01-05 15:09:42.634026634 +0000 UTC m=+0.107933101 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 05 15:09:42 compute-0 podman[250795]: 2026-01-05 15:09:42.676596073 +0000 UTC m=+0.154927429 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.165 185478 DEBUG nova.network.neutron [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updating instance_info_cache with network_info: [{"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.221 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Releasing lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.222 185478 DEBUG nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Instance network_info: |[{"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.223 185478 DEBUG oslo_concurrency.lockutils [req-0d36542d-8c48-4d4e-a1f1-546610237627 req-dea917ea-7458-4b39-9610-2e8630182155 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.223 185478 DEBUG nova.network.neutron [req-0d36542d-8c48-4d4e-a1f1-546610237627 req-dea917ea-7458-4b39-9610-2e8630182155 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Refreshing network info cache for port 39d7dd25-004e-46d1-b35c-19e1d39b90b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.226 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Start _get_guest_xml network_info=[{"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T15:08:04Z,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T15:08:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.235 185478 WARNING nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.246 185478 DEBUG nova.virt.libvirt.host [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.247 185478 DEBUG nova.virt.libvirt.host [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.251 185478 DEBUG nova.virt.libvirt.host [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.252 185478 DEBUG nova.virt.libvirt.host [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.253 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.254 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T15:08:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3a2fb381-0342-40f9-8eb5-089f8c9475fd',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T15:08:04Z,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='54417029b2fb4b749e20754214013802',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T15:08:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.254 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.255 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.255 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.255 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.256 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.256 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.257 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.257 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.257 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.258 185478 DEBUG nova.virt.hardware [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.262 185478 DEBUG nova.virt.libvirt.vif [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T15:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-141186871',display_name='tempest-TestNetworkBasicOps-server-141186871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-141186871',id=10,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE7+PHzr89/ErKG/vgzfJel9PkpQEHZeqH7T1Jbf+shgI0Q4XRhGdXXBBFloo2IeGKa1FlNtQaTgBydeEEVXqi+pm1sAFTEBKf70vSIpcARbyAP20SCqZdimFDzPUPJBYw==',key_name='tempest-TestNetworkBasicOps-1945306424',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='134d57b916be4f4ca80b3a59630701e5',ramdisk_id='',reservation_id='r-zkxvm2zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-197593556',owner_user_name='tempest-TestNetworkBasicOps-197593556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:09:36Z,user_data=None,user_id='8d883f36e32b4c71b56683d7117547d8',uuid=e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.263 185478 DEBUG nova.network.os_vif_util [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Converting VIF {"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.264 185478 DEBUG nova.network.os_vif_util [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=39d7dd25-004e-46d1-b35c-19e1d39b90b7,network=Network(a4d9427d-0bca-46c0-aaca-aa38c0dca8a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39d7dd25-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.265 185478 DEBUG nova.objects.instance [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.280 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] End _get_guest_xml xml=<domain type="kvm">
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <uuid>e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8</uuid>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <name>instance-0000000a</name>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <memory>131072</memory>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <metadata>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <nova:name>tempest-TestNetworkBasicOps-server-141186871</nova:name>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 15:09:43</nova:creationTime>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <nova:flavor name="m1.nano">
Jan 05 15:09:43 compute-0 nova_compute[185474]:         <nova:memory>128</nova:memory>
Jan 05 15:09:43 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 15:09:43 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 15:09:43 compute-0 nova_compute[185474]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 15:09:43 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 15:09:43 compute-0 nova_compute[185474]:         <nova:user uuid="8d883f36e32b4c71b56683d7117547d8">tempest-TestNetworkBasicOps-197593556-project-member</nova:user>
Jan 05 15:09:43 compute-0 nova_compute[185474]:         <nova:project uuid="134d57b916be4f4ca80b3a59630701e5">tempest-TestNetworkBasicOps-197593556</nova:project>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="e22fea2c-125b-4347-8d96-267cb6a6831b"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <nova:ports>
Jan 05 15:09:43 compute-0 nova_compute[185474]:         <nova:port uuid="39d7dd25-004e-46d1-b35c-19e1d39b90b7">
Jan 05 15:09:43 compute-0 nova_compute[185474]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:         </nova:port>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       </nova:ports>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   </metadata>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <system>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <entry name="serial">e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8</entry>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <entry name="uuid">e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8</entry>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     </system>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <os>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   </os>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <features>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <apic/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   </features>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   </clock>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   </cpu>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   <devices>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.config"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <interface type="ethernet">
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <mac address="fa:16:3e:d8:1f:9a"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <mtu size="1442"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <target dev="tap39d7dd25-00"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     </interface>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/console.log" append="off"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     </serial>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <video>
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     </video>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     </rng>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 15:09:43 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 15:09:43 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 15:09:43 compute-0 nova_compute[185474]:   </devices>
Jan 05 15:09:43 compute-0 nova_compute[185474]: </domain>
Jan 05 15:09:43 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.282 185478 DEBUG nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Preparing to wait for external event network-vif-plugged-39d7dd25-004e-46d1-b35c-19e1d39b90b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.282 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Acquiring lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.282 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.283 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.283 185478 DEBUG nova.virt.libvirt.vif [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T15:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-141186871',display_name='tempest-TestNetworkBasicOps-server-141186871',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-141186871',id=10,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE7+PHzr89/ErKG/vgzfJel9PkpQEHZeqH7T1Jbf+shgI0Q4XRhGdXXBBFloo2IeGKa1FlNtQaTgBydeEEVXqi+pm1sAFTEBKf70vSIpcARbyAP20SCqZdimFDzPUPJBYw==',key_name='tempest-TestNetworkBasicOps-1945306424',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='134d57b916be4f4ca80b3a59630701e5',ramdisk_id='',reservation_id='r-zkxvm2zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-197593556',owner_user_name='tempest-TestNetworkBasicOps-197593556-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:09:36Z,user_data=None,user_id='8d883f36e32b4c71b56683d7117547d8',uuid=e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.284 185478 DEBUG nova.network.os_vif_util [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Converting VIF {"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.284 185478 DEBUG nova.network.os_vif_util [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=39d7dd25-004e-46d1-b35c-19e1d39b90b7,network=Network(a4d9427d-0bca-46c0-aaca-aa38c0dca8a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39d7dd25-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.285 185478 DEBUG os_vif [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=39d7dd25-004e-46d1-b35c-19e1d39b90b7,network=Network(a4d9427d-0bca-46c0-aaca-aa38c0dca8a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39d7dd25-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.285 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.286 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.286 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.290 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.291 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39d7dd25-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.291 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap39d7dd25-00, col_values=(('external_ids', {'iface-id': '39d7dd25-004e-46d1-b35c-19e1d39b90b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:1f:9a', 'vm-uuid': 'e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.294 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.295 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 15:09:43 compute-0 NetworkManager[56139]: <info>  [1767625783.2967] manager: (tap39d7dd25-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.304 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.305 185478 INFO os_vif [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:1f:9a,bridge_name='br-int',has_traffic_filtering=True,id=39d7dd25-004e-46d1-b35c-19e1d39b90b7,network=Network(a4d9427d-0bca-46c0-aaca-aa38c0dca8a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39d7dd25-00')
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.401 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.402 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.403 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] No VIF found with MAC fa:16:3e:d8:1f:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 15:09:43 compute-0 nova_compute[185474]: 2026-01-05 15:09:43.403 185478 INFO nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Using config drive
Jan 05 15:09:44 compute-0 nova_compute[185474]: 2026-01-05 15:09:44.342 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:44 compute-0 nova_compute[185474]: 2026-01-05 15:09:44.828 185478 INFO nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Creating config drive at /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.config
Jan 05 15:09:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:44.829 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:44.830 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:44.831 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:44 compute-0 nova_compute[185474]: 2026-01-05 15:09:44.839 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4dr8l4b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:44 compute-0 nova_compute[185474]: 2026-01-05 15:09:44.985 185478 DEBUG oslo_concurrency.processutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn4dr8l4b" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:45 compute-0 kernel: tap39d7dd25-00: entered promiscuous mode
Jan 05 15:09:45 compute-0 ovn_controller[97763]: 2026-01-05T15:09:45Z|00110|binding|INFO|Claiming lport 39d7dd25-004e-46d1-b35c-19e1d39b90b7 for this chassis.
Jan 05 15:09:45 compute-0 ovn_controller[97763]: 2026-01-05T15:09:45Z|00111|binding|INFO|39d7dd25-004e-46d1-b35c-19e1d39b90b7: Claiming fa:16:3e:d8:1f:9a 10.100.0.12
Jan 05 15:09:45 compute-0 NetworkManager[56139]: <info>  [1767625785.0816] manager: (tap39d7dd25-00): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.076 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.094 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.096 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:1f:9a 10.100.0.12'], port_security=['fa:16:3e:d8:1f:9a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '134d57b916be4f4ca80b3a59630701e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c4626b8f-c7be-4842-bc7a-e8902b844f18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f7c8690-1835-40ac-96c7-bc937a5ec23d, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=39d7dd25-004e-46d1-b35c-19e1d39b90b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.099 107222 INFO neutron.agent.ovn.metadata.agent [-] Port 39d7dd25-004e-46d1-b35c-19e1d39b90b7 in datapath a4d9427d-0bca-46c0-aaca-aa38c0dca8a5 bound to our chassis
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.105 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a4d9427d-0bca-46c0-aaca-aa38c0dca8a5
Jan 05 15:09:45 compute-0 ovn_controller[97763]: 2026-01-05T15:09:45Z|00112|binding|INFO|Setting lport 39d7dd25-004e-46d1-b35c-19e1d39b90b7 ovn-installed in OVS
Jan 05 15:09:45 compute-0 ovn_controller[97763]: 2026-01-05T15:09:45Z|00113|binding|INFO|Setting lport 39d7dd25-004e-46d1-b35c-19e1d39b90b7 up in Southbound
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.109 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.114 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.126 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e830c1-067e-47d6-b4ac-926e60779d2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.128 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa4d9427d-01 in ovnmeta-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.131 239805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa4d9427d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.131 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[b271dff1-4db2-4863-9505-2eced31e3c57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.133 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[55e39f76-e4b5-48e2-8117-3de2d451bf8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 systemd-udevd[250879]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 15:09:45 compute-0 systemd-machined[156786]: New machine qemu-10-instance-0000000a.
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.154 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[26f50468-2d15-4d0e-90ad-3fea85c3221c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 NetworkManager[56139]: <info>  [1767625785.1655] device (tap39d7dd25-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 15:09:45 compute-0 NetworkManager[56139]: <info>  [1767625785.1660] device (tap39d7dd25-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 15:09:45 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.191 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[3f622154-6249-4fd2-bcb0-a7176984aef6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.233 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[74a5a19a-b9c0-46c1-b1c5-07f917fd7d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.240 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ad691f-62e0-4484-a38c-885b11385f4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 NetworkManager[56139]: <info>  [1767625785.2415] manager: (tapa4d9427d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Jan 05 15:09:45 compute-0 systemd-udevd[250884]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.292 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[379bd19e-ec1a-4778-be39-14dfabafa1e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.298 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf0212d-870a-49f3-a9f4-a0e6ed3b4b7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 NetworkManager[56139]: <info>  [1767625785.3299] device (tapa4d9427d-00): carrier: link connected
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.360 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1786b1-8954-4b3f-8810-a359dfd394ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.385 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[50b5bb5e-f0e4-449c-a69b-fe5b121c723d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa4d9427d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:74:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511227, 'reachable_time': 40447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250912, 'error': None, 'target': 'ovnmeta-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.407 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd037fb-0929-49fd-a8a6-5830d8263a60]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:749d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511227, 'tstamp': 511227}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250913, 'error': None, 'target': 'ovnmeta-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.431 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f2976a19-8bb1-4200-b647-0cb4980619ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa4d9427d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:74:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511227, 'reachable_time': 40447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250914, 'error': None, 'target': 'ovnmeta-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.471 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[d592628c-556d-49c7-b2c8-56f8e21f5e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.549 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8406f4-6d96-4d73-9583-f8b1e798376b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.551 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4d9427d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.551 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.552 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4d9427d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:45 compute-0 kernel: tapa4d9427d-00: entered promiscuous mode
Jan 05 15:09:45 compute-0 NetworkManager[56139]: <info>  [1767625785.5554] manager: (tapa4d9427d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.557 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.561 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa4d9427d-00, col_values=(('external_ids', {'iface-id': '4cc48a5f-b4b4-4326-a167-b706318b3e05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:09:45 compute-0 ovn_controller[97763]: 2026-01-05T15:09:45Z|00114|binding|INFO|Releasing lport 4cc48a5f-b4b4-4326-a167-b706318b3e05 from this chassis (sb_readonly=0)
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.563 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.564 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.567 107222 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a4d9427d-0bca-46c0-aaca-aa38c0dca8a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a4d9427d-0bca-46c0-aaca-aa38c0dca8a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.568 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[176e35c2-858f-4fd5-b3fa-991a99436830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.569 107222 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: global
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     log         /dev/log local0 debug
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     log-tag     haproxy-metadata-proxy-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     user        root
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     group       root
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     maxconn     1024
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     pidfile     /var/lib/neutron/external/pids/a4d9427d-0bca-46c0-aaca-aa38c0dca8a5.pid.haproxy
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     daemon
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: defaults
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     log global
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     mode http
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     option httplog
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     option dontlognull
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     option http-server-close
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     option forwardfor
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     retries                 3
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     timeout http-request    30s
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     timeout connect         30s
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     timeout client          32s
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     timeout server          32s
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     timeout http-keep-alive 30s
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: listen listener
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     bind 169.254.169.254:80
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:     http-request add-header X-OVN-Network-ID a4d9427d-0bca-46c0-aaca-aa38c0dca8a5
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 15:09:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:09:45.570 107222 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5', 'env', 'PROCESS_TAG=haproxy-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a4d9427d-0bca-46c0-aaca-aa38c0dca8a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.582 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.876 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625785.875494, e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.878 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] VM Started (Lifecycle Event)
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.922 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.935 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625785.8770635, e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.936 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] VM Paused (Lifecycle Event)
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.969 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.974 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:09:45 compute-0 nova_compute[185474]: 2026-01-05 15:09:45.994 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 15:09:46 compute-0 podman[250950]: 2026-01-05 15:09:46.095778058 +0000 UTC m=+0.092718104 container create 09c6c17be29fb6049e9a5d6dfa9fba4960069a4d9b4b295d1cc18aeafe4e6ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:09:46 compute-0 podman[250950]: 2026-01-05 15:09:46.042737168 +0000 UTC m=+0.039677244 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 15:09:46 compute-0 systemd[1]: Started libpod-conmon-09c6c17be29fb6049e9a5d6dfa9fba4960069a4d9b4b295d1cc18aeafe4e6ae1.scope.
Jan 05 15:09:46 compute-0 systemd[1]: Started libcrun container.
Jan 05 15:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17da2bb4d2daefae1d50a0df88277924137292a027bdb89f1143de38ea7bfc3f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 15:09:46 compute-0 podman[250950]: 2026-01-05 15:09:46.214854576 +0000 UTC m=+0.211794652 container init 09c6c17be29fb6049e9a5d6dfa9fba4960069a4d9b4b295d1cc18aeafe4e6ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 15:09:46 compute-0 podman[250950]: 2026-01-05 15:09:46.222398888 +0000 UTC m=+0.219338934 container start 09c6c17be29fb6049e9a5d6dfa9fba4960069a4d9b4b295d1cc18aeafe4e6ae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 15:09:46 compute-0 neutron-haproxy-ovnmeta-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5[250966]: [NOTICE]   (250970) : New worker (250972) forked
Jan 05 15:09:46 compute-0 neutron-haproxy-ovnmeta-a4d9427d-0bca-46c0-aaca-aa38c0dca8a5[250966]: [NOTICE]   (250970) : Loading success.
Jan 05 15:09:46 compute-0 nova_compute[185474]: 2026-01-05 15:09:46.832 185478 DEBUG nova.network.neutron [req-0d36542d-8c48-4d4e-a1f1-546610237627 req-dea917ea-7458-4b39-9610-2e8630182155 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updated VIF entry in instance network info cache for port 39d7dd25-004e-46d1-b35c-19e1d39b90b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:09:46 compute-0 nova_compute[185474]: 2026-01-05 15:09:46.833 185478 DEBUG nova.network.neutron [req-0d36542d-8c48-4d4e-a1f1-546610237627 req-dea917ea-7458-4b39-9610-2e8630182155 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updating instance_info_cache with network_info: [{"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:46 compute-0 nova_compute[185474]: 2026-01-05 15:09:46.859 185478 DEBUG oslo_concurrency.lockutils [req-0d36542d-8c48-4d4e-a1f1-546610237627 req-dea917ea-7458-4b39-9610-2e8630182155 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:09:48 compute-0 nova_compute[185474]: 2026-01-05 15:09:48.295 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:48 compute-0 ovn_controller[97763]: 2026-01-05T15:09:48Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:a0:eb 10.100.0.8
Jan 05 15:09:48 compute-0 ovn_controller[97763]: 2026-01-05T15:09:48Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:a0:eb 10.100.0.8
Jan 05 15:09:49 compute-0 nova_compute[185474]: 2026-01-05 15:09:49.345 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:51 compute-0 ovn_controller[97763]: 2026-01-05T15:09:51Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:dc:0e 10.100.0.13
Jan 05 15:09:51 compute-0 ovn_controller[97763]: 2026-01-05T15:09:51Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:dc:0e 10.100.0.13
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.515 185478 DEBUG nova.compute.manager [req-3e846c7f-f150-4aac-8654-86c66c681689 req-eea3de82-75bd-4f64-a626-e9e9da62971f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Received event network-vif-plugged-39d7dd25-004e-46d1-b35c-19e1d39b90b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.515 185478 DEBUG oslo_concurrency.lockutils [req-3e846c7f-f150-4aac-8654-86c66c681689 req-eea3de82-75bd-4f64-a626-e9e9da62971f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.516 185478 DEBUG oslo_concurrency.lockutils [req-3e846c7f-f150-4aac-8654-86c66c681689 req-eea3de82-75bd-4f64-a626-e9e9da62971f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.516 185478 DEBUG oslo_concurrency.lockutils [req-3e846c7f-f150-4aac-8654-86c66c681689 req-eea3de82-75bd-4f64-a626-e9e9da62971f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.517 185478 DEBUG nova.compute.manager [req-3e846c7f-f150-4aac-8654-86c66c681689 req-eea3de82-75bd-4f64-a626-e9e9da62971f 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Processing event network-vif-plugged-39d7dd25-004e-46d1-b35c-19e1d39b90b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.517 185478 DEBUG nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.532 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625791.5319479, e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.532 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] VM Resumed (Lifecycle Event)
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.534 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.541 185478 INFO nova.virt.libvirt.driver [-] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Instance spawned successfully.
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.542 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.557 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.567 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.575 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.575 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.576 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.576 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.576 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.577 185478 DEBUG nova.virt.libvirt.driver [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.603 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 15:09:51 compute-0 podman[251016]: 2026-01-05 15:09:51.620575663 +0000 UTC m=+0.101703464 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 15:09:51 compute-0 podman[251015]: 2026-01-05 15:09:51.625014313 +0000 UTC m=+0.112717950 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.643 185478 INFO nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Took 15.31 seconds to spawn the instance on the hypervisor.
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.644 185478 DEBUG nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.730 185478 INFO nova.compute.manager [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Took 15.84 seconds to build instance.
Jan 05 15:09:51 compute-0 nova_compute[185474]: 2026-01-05 15:09:51.756 185478 DEBUG oslo_concurrency.lockutils [None req-da9e87d9-fd81-4e51-8804-ea08ddadaaa8 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] Lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:53 compute-0 nova_compute[185474]: 2026-01-05 15:09:53.299 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:53 compute-0 nova_compute[185474]: 2026-01-05 15:09:53.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:09:53 compute-0 nova_compute[185474]: 2026-01-05 15:09:53.941 185478 DEBUG nova.compute.manager [req-2231fd9f-24ad-48be-a71f-ade386cb5729 req-aec3a873-493c-401f-bc44-2c0fd8c63437 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Received event network-vif-plugged-39d7dd25-004e-46d1-b35c-19e1d39b90b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:53 compute-0 nova_compute[185474]: 2026-01-05 15:09:53.941 185478 DEBUG oslo_concurrency.lockutils [req-2231fd9f-24ad-48be-a71f-ade386cb5729 req-aec3a873-493c-401f-bc44-2c0fd8c63437 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:53 compute-0 nova_compute[185474]: 2026-01-05 15:09:53.942 185478 DEBUG oslo_concurrency.lockutils [req-2231fd9f-24ad-48be-a71f-ade386cb5729 req-aec3a873-493c-401f-bc44-2c0fd8c63437 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:53 compute-0 nova_compute[185474]: 2026-01-05 15:09:53.942 185478 DEBUG oslo_concurrency.lockutils [req-2231fd9f-24ad-48be-a71f-ade386cb5729 req-aec3a873-493c-401f-bc44-2c0fd8c63437 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:53 compute-0 nova_compute[185474]: 2026-01-05 15:09:53.942 185478 DEBUG nova.compute.manager [req-2231fd9f-24ad-48be-a71f-ade386cb5729 req-aec3a873-493c-401f-bc44-2c0fd8c63437 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] No waiting events found dispatching network-vif-plugged-39d7dd25-004e-46d1-b35c-19e1d39b90b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:09:53 compute-0 nova_compute[185474]: 2026-01-05 15:09:53.943 185478 WARNING nova.compute.manager [req-2231fd9f-24ad-48be-a71f-ade386cb5729 req-aec3a873-493c-401f-bc44-2c0fd8c63437 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Received unexpected event network-vif-plugged-39d7dd25-004e-46d1-b35c-19e1d39b90b7 for instance with vm_state active and task_state None.
Jan 05 15:09:54 compute-0 nova_compute[185474]: 2026-01-05 15:09:54.347 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:54 compute-0 nova_compute[185474]: 2026-01-05 15:09:54.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:09:54 compute-0 nova_compute[185474]: 2026-01-05 15:09:54.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:09:54 compute-0 podman[251056]: 2026-01-05 15:09:54.612890578 +0000 UTC m=+0.106543804 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.433 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.434 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.434 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.434 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.557 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.642 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.644 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.711 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.731 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.823 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.825 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.888 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.899 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:55 compute-0 nova_compute[185474]: 2026-01-05 15:09:55.998 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.001 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.088 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
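The repeated probes above are nova's periodic image checks: qemu-img info is run under oslo.concurrency's prlimit wrapper with a 1 GiB address-space cap and a 30-second CPU cap, and the JSON output is parsed. A minimal Python sketch of an equivalent call, with the path and limits taken from the log; the helper name and the direct processutils call are illustrative, not the actual Nova code path:

import json
from oslo_concurrency import processutils

def qemu_img_info(path):
    # Mirrors the logged command: prlimit --as=1073741824 --cpu=30 --
    #   env LC_ALL=C LANG=C qemu-img info <path> --force-share --output=json
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', path, '--force-share', '--output=json',
        prlimit=limits)
    return json.loads(out)

info = qemu_img_info('/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk')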
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.549 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.550 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4842MB free_disk=72.32154083251953GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.550 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.551 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.589 185478 DEBUG nova.compute.manager [req-8badc016-535e-48cd-aa85-1d83ce96c289 req-761db86a-b048-4f2a-85f8-780662773192 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Received event network-changed-39d7dd25-004e-46d1-b35c-19e1d39b90b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.589 185478 DEBUG nova.compute.manager [req-8badc016-535e-48cd-aa85-1d83ce96c289 req-761db86a-b048-4f2a-85f8-780662773192 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Refreshing instance network info cache due to event network-changed-39d7dd25-004e-46d1-b35c-19e1d39b90b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.590 185478 DEBUG oslo_concurrency.lockutils [req-8badc016-535e-48cd-aa85-1d83ce96c289 req-761db86a-b048-4f2a-85f8-780662773192 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.590 185478 DEBUG oslo_concurrency.lockutils [req-8badc016-535e-48cd-aa85-1d83ce96c289 req-761db86a-b048-4f2a-85f8-780662773192 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.591 185478 DEBUG nova.network.neutron [req-8badc016-535e-48cd-aa85-1d83ce96c289 req-761db86a-b048-4f2a-85f8-780662773192 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Refreshing network info cache for port 39d7dd25-004e-46d1-b35c-19e1d39b90b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.701 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 9f321f76-b34e-4ad0-b6c4-285f4470baa0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.702 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 00943943-b19d-4862-8829-45a5cc14e988 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.702 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.702 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.703 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.816 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.839 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
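The inventory reported above is what Placement uses to size this provider; usable capacity per resource class is (total - reserved) * allocation_ratio. A worked example with the logged values:

# Values copied from the inventory line above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2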
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.857 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:09:56 compute-0 nova_compute[185474]: 2026-01-05 15:09:56.858 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
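The Acquiring/acquired/released lines bracketing the resource update are the DEBUG output of oslo.concurrency's lock helpers. A minimal sketch of the same pattern, with an illustrative function name rather than the actual ResourceTracker method:

from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def update_available_resource():
    # Body runs with the in-process "compute_resources" lock held;
    # the wrapper emits the acquire/release messages seen above.
    pass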
Jan 05 15:09:57 compute-0 nova_compute[185474]: 2026-01-05 15:09:57.859 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:09:58 compute-0 nova_compute[185474]: 2026-01-05 15:09:58.303 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:59 compute-0 nova_compute[185474]: 2026-01-05 15:09:59.043 185478 DEBUG nova.network.neutron [req-8badc016-535e-48cd-aa85-1d83ce96c289 req-761db86a-b048-4f2a-85f8-780662773192 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updated VIF entry in instance network info cache for port 39d7dd25-004e-46d1-b35c-19e1d39b90b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:09:59 compute-0 nova_compute[185474]: 2026-01-05 15:09:59.044 185478 DEBUG nova.network.neutron [req-8badc016-535e-48cd-aa85-1d83ce96c289 req-761db86a-b048-4f2a-85f8-780662773192 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updating instance_info_cache with network_info: [{"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:09:59 compute-0 nova_compute[185474]: 2026-01-05 15:09:59.076 185478 DEBUG oslo_concurrency.lockutils [req-8badc016-535e-48cd-aa85-1d83ce96c289 req-761db86a-b048-4f2a-85f8-780662773192 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
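The network_info blob cached above follows nova's network model layout: one VIF dict per port, each with nested subnets and ips. A small sketch, assuming that shape, for pulling the fixed and floating addresses out of such an entry:

def addresses(vif):
    fixed, floating = [], []
    for subnet in vif['network']['subnets']:
        for ip in subnet['ips']:
            fixed.append(ip['address'])
            floating.extend(f['address'] for f in ip.get('floating_ips', []))
    return fixed, floating

# For port 39d7dd25-004e-46d1-b35c-19e1d39b90b7 this returns
# (['10.100.0.12'], ['192.168.122.234']), matching the cache entry above.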
Jan 05 15:09:59 compute-0 nova_compute[185474]: 2026-01-05 15:09:59.351 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:09:59 compute-0 nova_compute[185474]: 2026-01-05 15:09:59.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:09:59 compute-0 podman[201880]: time="2026-01-05T15:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:09:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30974 "" "Go-http-client/1.1"
Jan 05 15:09:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5300 "" "Go-http-client/1.1"
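The GET requests logged by podman are ordinary libpod REST calls over the podman API socket (the podman_exporter config below points CONTAINER_HOST at /run/podman/podman.sock). A stdlib-only sketch of the same containers/json query, assuming that socket path:

import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # HTTPConnection that dials a unix socket instead of TCP.
    def __init__(self, path):
        super().__init__('localhost')
        self._path = path
    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
containers = json.loads(conn.getresponse().read())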
Jan 05 15:10:00 compute-0 nova_compute[185474]: 2026-01-05 15:10:00.394 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:10:00 compute-0 nova_compute[185474]: 2026-01-05 15:10:00.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:10:00 compute-0 nova_compute[185474]: 2026-01-05 15:10:00.397 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:10:00 compute-0 nova_compute[185474]: 2026-01-05 15:10:00.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 15:10:01 compute-0 nova_compute[185474]: 2026-01-05 15:10:01.204 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:10:01 compute-0 nova_compute[185474]: 2026-01-05 15:10:01.206 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:10:01 compute-0 nova_compute[185474]: 2026-01-05 15:10:01.207 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 15:10:01 compute-0 nova_compute[185474]: 2026-01-05 15:10:01.207 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9f321f76-b34e-4ad0-b6c4-285f4470baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:10:01 compute-0 openstack_network_exporter[205179]: ERROR   15:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:10:01 compute-0 openstack_network_exporter[205179]: ERROR   15:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:10:01 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:01.843 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:10:01 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:01.844 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 15:10:01 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:01.845 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:10:01 compute-0 nova_compute[185474]: 2026-01-05 15:10:01.849 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:03 compute-0 nova_compute[185474]: 2026-01-05 15:10:03.311 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:04 compute-0 nova_compute[185474]: 2026-01-05 15:10:04.088 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Updating instance_info_cache with network_info: [{"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:10:04 compute-0 nova_compute[185474]: 2026-01-05 15:10:04.104 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:10:04 compute-0 nova_compute[185474]: 2026-01-05 15:10:04.105 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 15:10:04 compute-0 nova_compute[185474]: 2026-01-05 15:10:04.107 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:10:04 compute-0 nova_compute[185474]: 2026-01-05 15:10:04.356 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:04 compute-0 podman[251096]: 2026-01-05 15:10:04.653513619 +0000 UTC m=+0.123094018 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 15:10:08 compute-0 nova_compute[185474]: 2026-01-05 15:10:08.316 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:09 compute-0 nova_compute[185474]: 2026-01-05 15:10:09.359 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:09 compute-0 nova_compute[185474]: 2026-01-05 15:10:09.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:10:09 compute-0 nova_compute[185474]: 2026-01-05 15:10:09.446 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:10:10 compute-0 podman[251115]: 2026-01-05 15:10:10.648239327 +0000 UTC m=+0.128093160 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9)
Jan 05 15:10:13 compute-0 nova_compute[185474]: 2026-01-05 15:10:13.319 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:13 compute-0 podman[251137]: 2026-01-05 15:10:13.615967344 +0000 UTC m=+0.090079303 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:10:13 compute-0 podman[251136]: 2026-01-05 15:10:13.642961246 +0000 UTC m=+0.121218577 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 15:10:13 compute-0 podman[251138]: 2026-01-05 15:10:13.686584074 +0000 UTC m=+0.150135181 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 15:10:14 compute-0 nova_compute[185474]: 2026-01-05 15:10:14.361 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:18 compute-0 nova_compute[185474]: 2026-01-05 15:10:18.324 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:19 compute-0 nova_compute[185474]: 2026-01-05 15:10:19.364 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:22 compute-0 podman[251200]: 2026-01-05 15:10:22.618658465 +0000 UTC m=+0.108267940 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 15:10:22 compute-0 podman[251201]: 2026-01-05 15:10:22.629163446 +0000 UTC m=+0.105736392 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 15:10:23 compute-0 nova_compute[185474]: 2026-01-05 15:10:23.327 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:23 compute-0 ovn_controller[97763]: 2026-01-05T15:10:23Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:1f:9a 10.100.0.12
Jan 05 15:10:23 compute-0 ovn_controller[97763]: 2026-01-05T15:10:23Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:1f:9a 10.100.0.12
Jan 05 15:10:24 compute-0 nova_compute[185474]: 2026-01-05 15:10:24.369 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:25 compute-0 podman[251251]: 2026-01-05 15:10:25.665989992 +0000 UTC m=+0.145390104 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, name=ubi9, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 05 15:10:28 compute-0 nova_compute[185474]: 2026-01-05 15:10:28.330 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:29 compute-0 nova_compute[185474]: 2026-01-05 15:10:29.373 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:29 compute-0 podman[201880]: time="2026-01-05T15:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:10:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30974 "" "Go-http-client/1.1"
Jan 05 15:10:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5310 "" "Go-http-client/1.1"
Jan 05 15:10:30 compute-0 nova_compute[185474]: 2026-01-05 15:10:30.931 185478 INFO nova.compute.manager [None req-faabd78b-dca3-412d-b274-3a581cb05975 8d883f36e32b4c71b56683d7117547d8 134d57b916be4f4ca80b3a59630701e5 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Get console output
Jan 05 15:10:31 compute-0 nova_compute[185474]: 2026-01-05 15:10:31.039 239631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
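The ignored TypeError here is the usual bytes-plus-None concatenation: when a console read returns no data, appending it to the accumulated output fails with exactly this message. A tiny sketch of the failure mode and a defensive form, illustrative rather than the nova.privsep.libvirt code:

data = b'console output so far\n'
tail = None                      # a read that produced nothing
try:
    data = data + tail           # TypeError: can't concat NoneType to bytes
except TypeError:
    pass                         # logged and ignored, as in the line above
data = data + (tail or b'')      # guard that avoids the exception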
Jan 05 15:10:31 compute-0 openstack_network_exporter[205179]: ERROR   15:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:10:31 compute-0 openstack_network_exporter[205179]: ERROR   15:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:10:32 compute-0 nova_compute[185474]: 2026-01-05 15:10:32.932 185478 DEBUG nova.objects.instance [None req-b229f9e3-3041-4a0e-9880-14443c2b256e f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lazy-loading 'flavor' on Instance uuid 00943943-b19d-4862-8829-45a5cc14e988 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:10:32 compute-0 nova_compute[185474]: 2026-01-05 15:10:32.985 185478 DEBUG oslo_concurrency.lockutils [None req-b229f9e3-3041-4a0e-9880-14443c2b256e f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquiring lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:10:32 compute-0 nova_compute[185474]: 2026-01-05 15:10:32.986 185478 DEBUG oslo_concurrency.lockutils [None req-b229f9e3-3041-4a0e-9880-14443c2b256e f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquired lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:10:33 compute-0 nova_compute[185474]: 2026-01-05 15:10:33.333 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:34 compute-0 nova_compute[185474]: 2026-01-05 15:10:34.375 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:35 compute-0 podman[251284]: 2026-01-05 15:10:35.627778284 +0000 UTC m=+0.114104966 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 15:10:37 compute-0 nova_compute[185474]: 2026-01-05 15:10:37.744 185478 DEBUG nova.compute.manager [req-42295879-e6c5-44b8-8d0c-f00cf37b4631 req-60835332-1706-4c0c-b2b3-2a4dc7c8f672 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Received event network-changed-39d7dd25-004e-46d1-b35c-19e1d39b90b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:10:37 compute-0 nova_compute[185474]: 2026-01-05 15:10:37.744 185478 DEBUG nova.compute.manager [req-42295879-e6c5-44b8-8d0c-f00cf37b4631 req-60835332-1706-4c0c-b2b3-2a4dc7c8f672 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Refreshing instance network info cache due to event network-changed-39d7dd25-004e-46d1-b35c-19e1d39b90b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:10:37 compute-0 nova_compute[185474]: 2026-01-05 15:10:37.745 185478 DEBUG oslo_concurrency.lockutils [req-42295879-e6c5-44b8-8d0c-f00cf37b4631 req-60835332-1706-4c0c-b2b3-2a4dc7c8f672 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:10:37 compute-0 nova_compute[185474]: 2026-01-05 15:10:37.746 185478 DEBUG oslo_concurrency.lockutils [req-42295879-e6c5-44b8-8d0c-f00cf37b4631 req-60835332-1706-4c0c-b2b3-2a4dc7c8f672 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:10:37 compute-0 nova_compute[185474]: 2026-01-05 15:10:37.747 185478 DEBUG nova.network.neutron [req-42295879-e6c5-44b8-8d0c-f00cf37b4631 req-60835332-1706-4c0c-b2b3-2a4dc7c8f672 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Refreshing network info cache for port 39d7dd25-004e-46d1-b35c-19e1d39b90b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.757 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.758 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.758 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.759 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.761 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7daa750>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.765 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 9f321f76-b34e-4ad0-b6c4-285f4470baa0 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 15:10:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:37.767 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/9f321f76-b34e-4ad0-b6c4-285f4470baa0 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}3a9a6b0d955f091f392374a695f163a2995629ca5c315b3823e8a6b9c12e4c9b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 15:10:38 compute-0 nova_compute[185474]: 2026-01-05 15:10:38.336 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:39 compute-0 nova_compute[185474]: 2026-01-05 15:10:39.046 185478 DEBUG oslo_concurrency.lockutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:10:39 compute-0 nova_compute[185474]: 2026-01-05 15:10:39.047 185478 DEBUG oslo_concurrency.lockutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:10:39 compute-0 nova_compute[185474]: 2026-01-05 15:10:39.048 185478 INFO nova.compute.manager [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Rebooting instance
Jan 05 15:10:39 compute-0 nova_compute[185474]: 2026-01-05 15:10:39.064 185478 DEBUG oslo_concurrency.lockutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:10:39 compute-0 nova_compute[185474]: 2026-01-05 15:10:39.065 185478 DEBUG oslo_concurrency.lockutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquired lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:10:39 compute-0 nova_compute[185474]: 2026-01-05 15:10:39.065 185478 DEBUG nova.network.neutron [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 15:10:39 compute-0 sshd-session[251305]: Invalid user solana from 165.22.168.95 port 35334
Jan 05 15:10:39 compute-0 sshd-session[251305]: Connection closed by invalid user solana 165.22.168.95 port 35334 [preauth]
Jan 05 15:10:39 compute-0 nova_compute[185474]: 2026-01-05 15:10:39.379 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:39 compute-0 nova_compute[185474]: 2026-01-05 15:10:39.604 185478 DEBUG nova.network.neutron [None req-b229f9e3-3041-4a0e-9880-14443c2b256e f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 15:10:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:39.836 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1981 Content-Type: application/json Date: Mon, 05 Jan 2026 15:10:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d3a7afe0-8353-41eb-9614-800615b82429 x-openstack-request-id: req-d3a7afe0-8353-41eb-9614-800615b82429 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 15:10:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:39.836 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "9f321f76-b34e-4ad0-b6c4-285f4470baa0", "name": "tempest-ServerActionsTestJSON-server-864778593", "status": "HARD_REBOOT", "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "user_id": "b1c84f20ffdd429d9965ed731c086635", "metadata": {}, "hostId": "84dbec19e8f89e0a1922d29a4f31c5def3723dff804614cc3dfa65fc", "image": {"id": "e22fea2c-125b-4347-8d96-267cb6a6831b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e22fea2c-125b-4347-8d96-267cb6a6831b"}]}, "flavor": {"id": "3a2fb381-0342-40f9-8eb5-089f8c9475fd", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/3a2fb381-0342-40f9-8eb5-089f8c9475fd"}]}, "created": "2026-01-05T15:08:52Z", "updated": "2026-01-05T15:10:37Z", "addresses": {"tempest-ServerActionsTestJSON-1288657617-network": [{"version": 4, "addr": "10.100.0.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4d:dc:0e"}, {"version": 4, "addr": "192.168.122.182", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:4d:dc:0e"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/9f321f76-b34e-4ad0-b6c4-285f4470baa0"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/9f321f76-b34e-4ad0-b6c4-285f4470baa0"}], "OS-DCF:diskConfig": "MANUAL", "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-763020533", "OS-SRV-USG:launched_at": "2026-01-05T15:09:16.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--102749030"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "rebooting_hard", "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 15:10:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:39.836 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/9f321f76-b34e-4ad0-b6c4-285f4470baa0 used request id req-d3a7afe0-8353-41eb-9614-800615b82429 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 15:10:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:39.838 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9f321f76-b34e-4ad0-b6c4-285f4470baa0', 'name': 'tempest-ServerActionsTestJSON-server-864778593', 'flavor': {'id': '3a2fb381-0342-40f9-8eb5-089f8c9475fd', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '23dc0aab10ca466cb1b268ba1c456ac1', 'user_id': 'b1c84f20ffdd429d9965ed731c086635', 'hostId': '84dbec19e8f89e0a1922d29a4f31c5def3723dff804614cc3dfa65fc', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 15:10:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:39.843 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 15:10:39 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:39.844 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}3a9a6b0d955f091f392374a695f163a2995629ca5c315b3823e8a6b9c12e4c9b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 15:10:40 compute-0 nova_compute[185474]: 2026-01-05 15:10:40.557 185478 DEBUG nova.compute.manager [req-96c49ad0-9eb1-4e9a-b41a-41db8d768890 req-6ecce257-402c-4eaa-9393-acb3b24d3eaa 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Received event network-changed-a5cac4ea-b043-4a43-9bef-a37897937741 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:10:40 compute-0 nova_compute[185474]: 2026-01-05 15:10:40.558 185478 DEBUG nova.compute.manager [req-96c49ad0-9eb1-4e9a-b41a-41db8d768890 req-6ecce257-402c-4eaa-9393-acb3b24d3eaa 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Refreshing instance network info cache due to event network-changed-a5cac4ea-b043-4a43-9bef-a37897937741. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:10:40 compute-0 nova_compute[185474]: 2026-01-05 15:10:40.559 185478 DEBUG oslo_concurrency.lockutils [req-96c49ad0-9eb1-4e9a-b41a-41db8d768890 req-6ecce257-402c-4eaa-9393-acb3b24d3eaa 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:10:41 compute-0 podman[251307]: 2026-01-05 15:10:41.680523284 +0000 UTC m=+0.163341946 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, release=1755695350, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Jan 05 15:10:41 compute-0 nova_compute[185474]: 2026-01-05 15:10:41.911 185478 DEBUG nova.network.neutron [req-42295879-e6c5-44b8-8d0c-f00cf37b4631 req-60835332-1706-4c0c-b2b3-2a4dc7c8f672 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updated VIF entry in instance network info cache for port 39d7dd25-004e-46d1-b35c-19e1d39b90b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:10:41 compute-0 nova_compute[185474]: 2026-01-05 15:10:41.913 185478 DEBUG nova.network.neutron [req-42295879-e6c5-44b8-8d0c-f00cf37b4631 req-60835332-1706-4c0c-b2b3-2a4dc7c8f672 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updating instance_info_cache with network_info: [{"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:10:42 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:42.331 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1976 Content-Type: application/json Date: Mon, 05 Jan 2026 15:10:39 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-6da2435e-da91-4148-88b1-29a0e2ea4b6d x-openstack-request-id: req-6da2435e-da91-4148-88b1-29a0e2ea4b6d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 15:10:42 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:42.331 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8", "name": "tempest-TestNetworkBasicOps-server-141186871", "status": "ACTIVE", "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "user_id": "8d883f36e32b4c71b56683d7117547d8", "metadata": {}, "hostId": "dd91e800a8ccaf570defe3489ea6eac358fb3fd9b78a6f5299436f84", "image": {"id": "e22fea2c-125b-4347-8d96-267cb6a6831b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e22fea2c-125b-4347-8d96-267cb6a6831b"}]}, "flavor": {"id": "3a2fb381-0342-40f9-8eb5-089f8c9475fd", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/3a2fb381-0342-40f9-8eb5-089f8c9475fd"}]}, "created": "2026-01-05T15:09:34Z", "updated": "2026-01-05T15:09:51Z", "addresses": {"tempest-network-smoke--1910768748": [{"version": 4, "addr": "10.100.0.12", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d8:1f:9a"}, {"version": 4, "addr": "192.168.122.234", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d8:1f:9a"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1945306424", "OS-SRV-USG:launched_at": "2026-01-05T15:09:51.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1214727255"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 15:10:42 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:42.331 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 used request id req-6da2435e-da91-4148-88b1-29a0e2ea4b6d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 15:10:42 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:42.334 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8', 'name': 'tempest-TestNetworkBasicOps-server-141186871', 'flavor': {'id': '3a2fb381-0342-40f9-8eb5-089f8c9475fd', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '134d57b916be4f4ca80b3a59630701e5', 'user_id': '8d883f36e32b4c71b56683d7117547d8', 'hostId': 'dd91e800a8ccaf570defe3489ea6eac358fb3fd9b78a6f5299436f84', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 15:10:42 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:42.339 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 00943943-b19d-4862-8829-45a5cc14e988 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 15:10:42 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:42.340 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/00943943-b19d-4862-8829-45a5cc14e988 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}3a9a6b0d955f091f392374a695f163a2995629ca5c315b3823e8a6b9c12e4c9b" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 15:10:42 compute-0 nova_compute[185474]: 2026-01-05 15:10:42.653 185478 DEBUG oslo_concurrency.lockutils [req-42295879-e6c5-44b8-8d0c-f00cf37b4631 req-60835332-1706-4c0c-b2b3-2a4dc7c8f672 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:10:42 compute-0 nova_compute[185474]: 2026-01-05 15:10:42.895 185478 DEBUG nova.network.neutron [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Updating instance_info_cache with network_info: [{"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.050 185478 DEBUG oslo_concurrency.lockutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Releasing lock "refresh_cache-9f321f76-b34e-4ad0-b6c4-285f4470baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.053 185478 DEBUG nova.compute.manager [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.264 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.265 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.295 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.339 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:43 compute-0 kernel: tap5d68d02c-72 (unregistering): left promiscuous mode
Jan 05 15:10:43 compute-0 NetworkManager[56139]: <info>  [1767625843.4124] device (tap5d68d02c-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.425 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:43 compute-0 ovn_controller[97763]: 2026-01-05T15:10:43Z|00115|binding|INFO|Releasing lport 5d68d02c-7204-4217-adec-1d5b6f2fc0be from this chassis (sb_readonly=0)
Jan 05 15:10:43 compute-0 ovn_controller[97763]: 2026-01-05T15:10:43Z|00116|binding|INFO|Setting lport 5d68d02c-7204-4217-adec-1d5b6f2fc0be down in Southbound
Jan 05 15:10:43 compute-0 ovn_controller[97763]: 2026-01-05T15:10:43Z|00117|binding|INFO|Removing iface tap5d68d02c-72 ovn-installed in OVS
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.433 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.449 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.448 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:dc:0e 10.100.0.13'], port_security=['fa:16:3e:4d:dc:0e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9f321f76-b34e-4ad0-b6c4-285f4470baa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7313966f-87a0-413c-b336-702cd552f4fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23dc0aab10ca466cb1b268ba1c456ac1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '347728ff-d8cb-45fb-b3a1-665f18a6be0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7084d359-9113-48e1-9593-68ec04f6720b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=5d68d02c-7204-4217-adec-1d5b6f2fc0be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.450 107222 INFO neutron.agent.ovn.metadata.agent [-] Port 5d68d02c-7204-4217-adec-1d5b6f2fc0be in datapath 7313966f-87a0-413c-b336-702cd552f4fe unbound from our chassis
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.453 107222 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7313966f-87a0-413c-b336-702cd552f4fe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.455 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[9f99a075-4484-48fb-b1b1-c7c22ace29ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.456 107222 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe namespace which is not needed anymore
Jan 05 15:10:43 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 05 15:10:43 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 43.300s CPU time.
Jan 05 15:10:43 compute-0 systemd-machined[156786]: Machine qemu-6-instance-00000006 terminated.
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.505 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1996 Content-Type: application/json Date: Mon, 05 Jan 2026 15:10:42 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c1b6f420-b270-4f17-b128-9ef854dcd9cf x-openstack-request-id: req-c1b6f420-b270-4f17-b128-9ef854dcd9cf _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.505 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "00943943-b19d-4862-8829-45a5cc14e988", "name": "tempest-AttachInterfacesUnderV243Test-server-2119923937", "status": "ACTIVE", "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "user_id": "f2d114b57ba04fe69b1c1c673fb3da52", "metadata": {}, "hostId": "e1b5aea2779c08b8229a0ef33c93fbf2dcc56b160d07dca2bcd12122", "image": {"id": "e22fea2c-125b-4347-8d96-267cb6a6831b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e22fea2c-125b-4347-8d96-267cb6a6831b"}]}, "flavor": {"id": "3a2fb381-0342-40f9-8eb5-089f8c9475fd", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/3a2fb381-0342-40f9-8eb5-089f8c9475fd"}]}, "created": "2026-01-05T15:09:07Z", "updated": "2026-01-05T15:09:15Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1370621257-network": [{"version": 4, "addr": "10.100.0.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:cb:a0:eb"}, {"version": 4, "addr": "192.168.122.241", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:cb:a0:eb"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/00943943-b19d-4862-8829-45a5cc14e988"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/00943943-b19d-4862-8829-45a5cc14e988"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-349641192", "OS-SRV-USG:launched_at": "2026-01-05T15:09:15.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1693160558"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.506 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/00943943-b19d-4862-8829-45a5cc14e988 used request id req-c1b6f420-b270-4f17-b128-9ef854dcd9cf request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.508 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '00943943-b19d-4862-8829-45a5cc14e988', 'name': 'tempest-AttachInterfacesUnderV243Test-server-2119923937', 'flavor': {'id': '3a2fb381-0342-40f9-8eb5-089f8c9475fd', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '47a5a3a457584254b36f5f2118cf6568', 'user_id': 'f2d114b57ba04fe69b1c1c673fb3da52', 'hostId': 'e1b5aea2779c08b8229a0ef33c93fbf2dcc56b160d07dca2bcd12122', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.508 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.508 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T15:10:43.508836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[250042]: [NOTICE]   (250046) : haproxy version is 2.8.14-c23fe91
Jan 05 15:10:43 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[250042]: [NOTICE]   (250046) : path to executable is /usr/sbin/haproxy
Jan 05 15:10:43 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[250042]: [WARNING]  (250046) : Exiting Master process...
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.638 14 DEBUG ceilometer.compute.pollsters [-] Instance 9f321f76-b34e-4ad0-b6c4-285f4470baa0 was shut off while getting sample of disk.device.write.latency: Failed to inspect data of instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>, domain state is SHUTOFF. get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:151
Jan 05 15:10:43 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[250042]: [ALERT]    (250046) : Current worker (250048) exited with code 143 (Terminated)
Jan 05 15:10:43 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[250042]: [WARNING]  (250046) : All workers exited. Exiting... (0)
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.647 185478 INFO nova.virt.libvirt.driver [-] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Instance destroyed successfully.
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.648 185478 DEBUG nova.objects.instance [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lazy-loading 'resources' on Instance uuid 9f321f76-b34e-4ad0-b6c4-285f4470baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:10:43 compute-0 podman[251351]: 2026-01-05 15:10:43.65342933 +0000 UTC m=+0.067174399 container died 63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 15:10:43 compute-0 systemd[1]: libpod-63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f.scope: Deactivated successfully.
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.667 185478 DEBUG nova.virt.libvirt.vif [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T15:08:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-864778593',display_name='tempest-ServerActionsTestJSON-server-864778593',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-864778593',id=6,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSqj77vlD6kVeek16cO/Hhu/zNaQXeoSK+F7dXcoh+Z9es9Ys2ZMWKCWVSXggTtqS4B5KUVwu17u1PvVEzOSYCL9wnO8by7z4oz/x0vi0Pzvt3LMGG6NC/ghGg3ZVB5ig==',key_name='tempest-keypair-763020533',keypairs=<?>,launch_index=0,launched_at=2026-01-05T15:09:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23dc0aab10ca466cb1b268ba1c456ac1',ramdisk_id='',reservation_id='r-75f25068',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-292757575',owner_user_name='tempest-ServerActionsTestJSON-292757575-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T15:10:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b1c84f20ffdd429d9965ed731c086635',uuid=9f321f76-b34e-4ad0-b6c4-285f4470baa0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.668 185478 DEBUG nova.network.os_vif_util [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converting VIF {"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.669 185478 DEBUG nova.network.os_vif_util [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.669 185478 DEBUG os_vif [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.671 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.671 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d68d02c-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.674 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.675 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.679 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.683 185478 INFO os_vif [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72')
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.692 185478 DEBUG nova.virt.libvirt.driver [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Start _get_guest_xml network_info=[{"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'size': 0, 'encryption_options': None, 'device_type': 'disk', 'image_id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.698 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.latency volume: 2075283930 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.699 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.701 185478 WARNING nova.virt.libvirt.driver [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:10:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-62ad3981894a8c62cfb7f01da37b4fd97ed67942e64740be9c14e9a239b4e893-merged.mount: Deactivated successfully.
Jan 05 15:10:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f-userdata-shm.mount: Deactivated successfully.
Jan 05 15:10:43 compute-0 podman[251351]: 2026-01-05 15:10:43.72772374 +0000 UTC m=+0.141468799 container cleanup 63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:10:43 compute-0 systemd[1]: libpod-conmon-63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f.scope: Deactivated successfully.
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.749 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.latency volume: 4105189292 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.750 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.750 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.750 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.750 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.750 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.751 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T15:10:43.750985) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.752 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.752 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.latency volume: 647796318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.752 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.latency volume: 52531640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.753 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.latency volume: 548886735 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.753 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.latency volume: 56692568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.753 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.753 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.753 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.753 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.753 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.754 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T15:10:43.753885) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.754 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.755 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.requests volume: 1114 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.755 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.755 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.requests volume: 1104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.755 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.755 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.756 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.756 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T15:10:43.756156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.757 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.770 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.771 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.784 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.785 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.785 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.785 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.785 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.785 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.786 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.786 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T15:10:43.786024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.787 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 podman[251381]: 2026-01-05 15:10:43.788036134 +0000 UTC m=+0.117067005 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.790 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 / tap39d7dd25-00 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.790 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 podman[251383]: 2026-01-05 15:10:43.793003548 +0000 UTC m=+0.113267945 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.795 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 00943943-b19d-4862-8829-45a5cc14e988 / tapa5cac4ea-b0 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.796 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.796 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.796 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.796 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.796 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.797 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.798 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.798 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.bytes volume: 72957952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.798 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.798 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.bytes volume: 73007104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.799 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.799 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.799 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.799 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.799 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.799 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.800 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.800 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.800 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.800 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.800 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.801 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.801 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.801 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.802 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.802 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.802 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.802 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.803 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T15:10:43.797036) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T15:10:43.799720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T15:10:43.800640) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T15:10:43.803247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.804 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.804 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.bytes volume: 30820864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.804 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.804 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.bytes volume: 30521856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.805 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.805 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.805 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.805 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.805 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.805 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T15:10:43.805697) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.806 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 podman[251433]: 2026-01-05 15:10:43.817426962 +0000 UTC m=+0.060480781 container remove 63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.825 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[1589533d-f125-4f62-8f83-a8ee1a286940]: (4, ('Mon Jan  5 03:10:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe (63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f)\n63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f\nMon Jan  5 03:10:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe (63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f)\n63ad583723e6a506e58575c590f94ce4a85d8c90bdc3299293003b4fa36e862f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.827 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd004a6-27a3-46fa-9c44-02f2c442881b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.828 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/cpu volume: 32560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.832 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7313966f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:10:43 compute-0 kernel: tap7313966f-80: left promiscuous mode
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.835 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:43 compute-0 podman[251412]: 2026-01-05 15:10:43.836519682 +0000 UTC m=+0.106902793 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 15:10:43 compute-0 nova_compute[185474]: 2026-01-05 15:10:43.849 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.853 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[91164dc5-87a7-441f-b759-26dd6d988107]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.861 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/cpu volume: 33890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.862 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.863 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T15:10:43.863179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.864 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.864 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.864 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.865 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.866 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T15:10:43.866081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.867 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.867 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.867 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.868 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.868 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.868 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.868 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T15:10:43.868831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.869 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.869 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-864778593>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-141186871>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-2119923937>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-864778593>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-141186871>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-2119923937>]
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.870 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.870 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.870 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T15:10:43.870713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.871 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.871 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[23c59185-95b5-41d6-b16f-b7440936b563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.872 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.872 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.872 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.872 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.872 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T15:10:43.872351) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.872 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.872 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[bf6b5917-f0b3-4194-9f25-9f1b13127c9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.872 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-864778593>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-141186871>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-2119923937>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-864778593>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-141186871>, <NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-2119923937>]
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.873 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.873 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.874 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.874 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T15:10:43.873947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.875 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.875 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.packets volume: 130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.875 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.876 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
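The repeated "Domain not found" lines are libvirt error code 42 (VIR_ERR_NO_DOMAIN) for an instance whose domain has already been removed; the pollster logs the exception and continues sampling the remaining instances. A hedged sketch of that lookup with the libvirt Python bindings follows; the connection URI is assumed and the UUID is the one from the log, purely for illustration.

# Sketch: look up a guest by UUID and tolerate an already-deleted domain,
# as the compute pollsters appear to do for instance-00000006.
import libvirt  # python3-libvirt bindings

def get_domain_info(conn, uuid):
    try:
        dom = conn.lookupByUUIDString(uuid)
    except libvirt.libvirtError as exc:
        if exc.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:  # error code 42
            print(f"Exception while getting samples: domain {uuid} not found, skipping")
            return None
        raise
    return dom.info()  # (state, maxMem, memory, nrVirtCpu, cpuTime)

conn = libvirt.open("qemu:///system")
get_domain_info(conn, "9f321f76-b34e-4ad0-b6c4-285f4470baa0")
conn.close()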
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.876 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.877 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.877 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T15:10:43.876969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.878 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.878 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.packets volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.878 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.879 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.880 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.880 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T15:10:43.880605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.881 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.881 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.882 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.882 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.883 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.883 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.883 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.883 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.884 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T15:10:43.883660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.884 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.884 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.885 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.885 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.886 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.886 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.886 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T15:10:43.886506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.887 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.887 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.bytes volume: 18782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.888 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.888 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.888 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.888 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.888 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.890 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T15:10:43.888684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.893 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.893 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.894 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.894 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.893 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f4172990-cf1c-4415-bff1-f7bb9d8caade]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507122, 'reachable_time': 39998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251476, 'error': None, 'target': 'ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
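The privsep reply above is a netlink RTM_NEWLINK dump for the loopback device inside the ovnmeta-7313966f-... namespace, returned to the ovn metadata agent by its privileged helper. Below is only a rough standalone sketch of producing a similar dump with pyroute2 (which is what neutron's privileged ip_lib uses under oslo.privsep); it assumes root privileges and that the namespace still exists.

# Sketch: dump link state inside a network namespace with pyroute2 (run as root).
from pyroute2 import NetNS

ns = NetNS("ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe")
try:
    for link in ns.get_links():
        # Each entry is a netlink message with IFLA_* attributes, as in the log.
        print(link.get_attr("IFLA_IFNAME"),
              link.get_attr("IFLA_MTU"),
              link.get_attr("IFLA_OPERSTATE"))
finally:
    ns.close()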
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.895 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T15:10:43.894982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.896 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.896 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.896 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.897 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.897 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.897 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.897 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.897 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T15:10:43.897728) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d7313966f\x2d87a0\x2d413c\x2db336\x2d702cd552f4fe.mount: Deactivated successfully.
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.899 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.899 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/memory.usage volume: 46.57421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.899 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/memory.usage volume: 42.90234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.898 107613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 15:10:43 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:43.898 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[571f3ea2-a403-4693-b229-0dd72cbef231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
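The two lines above record the metadata agent's privsep daemon deleting the ovnmeta-... namespace (with systemd then deactivating the corresponding run-netns mount, as logged earlier). A hedged standalone sketch of the same removal with pyroute2 follows; it needs root, and the namespace name is simply reused from the log for illustration.

# Sketch: remove a network namespace, roughly what
# neutron.privileged.agent.linux.ip_lib.remove_netns does through pyroute2.
from pyroute2 import netns

NS = "ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe"
if NS in netns.listnetns():
    netns.remove(NS)
    print(f"Namespace {NS} deleted.")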
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.899 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.900 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.900 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T15:10:43.900416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.901 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.901 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.901 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.902 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.902 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.903 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.903 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T15:10:43.903662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.904 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.904 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.bytes volume: 23129 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.905 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.bytes volume: 4475 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.905 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.905 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.905 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.906 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T15:10:43.906124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.906 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: libvirt: QEMU Driver error : Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0'
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.907 14 DEBUG ceilometer.compute.pollsters [-] Exception while getting samples Error from libvirt while looking up instance <name=instance-00000006, id=9f321f76-b34e-4ad0-b6c4-285f4470baa0>: [Error Code 42] Domain not found: no domain with matching uuid '9f321f76-b34e-4ad0-b6c4-285f4470baa0' get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:149
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.907 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.requests volume: 280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.907 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.907 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.908 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.908 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.909 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.910 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:43 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:10:43.911 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.032 185478 DEBUG nova.virt.libvirt.host [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.033 185478 DEBUG nova.virt.libvirt.host [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.039 185478 DEBUG nova.virt.libvirt.host [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.040 185478 DEBUG nova.virt.libvirt.host [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
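Here nova-compute first looks for a cgroup v1 cpu controller (missing on this host) and then finds one through cgroup v2. The snippet below is a simplified sketch of the v2 check only, assuming the unified hierarchy is mounted at /sys/fs/cgroup; it is not nova's exact code, and the function name is illustrative.

# Sketch: detect whether the cgroup v2 'cpu' controller is available on the host.
from pathlib import Path

def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
    controllers = Path(root, "cgroup.controllers")
    if not controllers.exists():
        return False  # not a cgroup v2 unified hierarchy
    return "cpu" in controllers.read_text().split()

print("CPU controller found on host." if has_cgroupsv2_cpu_controller()
      else "CPU controller missing on host.")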
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.040 185478 DEBUG nova.virt.libvirt.driver [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.040 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T15:08:02Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='3a2fb381-0342-40f9-8eb5-089f8c9475fd',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e22fea2c-125b-4347-8d96-267cb6a6831b,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.041 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.041 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.041 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.041 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.041 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.042 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.042 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.042 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.044 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.044 185478 DEBUG nova.virt.hardware [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.044 185478 DEBUG nova.objects.instance [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9f321f76-b34e-4ad0-b6c4-285f4470baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.070 185478 DEBUG oslo_concurrency.processutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.159 185478 DEBUG oslo_concurrency.processutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.config --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.160 185478 DEBUG oslo_concurrency.lockutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.160 185478 DEBUG oslo_concurrency.lockutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.165 185478 DEBUG oslo_concurrency.lockutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.166 185478 DEBUG nova.virt.libvirt.vif [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T15:08:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-864778593',display_name='tempest-ServerActionsTestJSON-server-864778593',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-864778593',id=6,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSqj77vlD6kVeek16cO/Hhu/zNaQXeoSK+F7dXcoh+Z9es9Ys2ZMWKCWVSXggTtqS4B5KUVwu17u1PvVEzOSYCL9wnO8by7z4oz/x0vi0Pzvt3LMGG6NC/ghGg3ZVB5ig==',key_name='tempest-keypair-763020533',keypairs=<?>,launch_index=0,launched_at=2026-01-05T15:09:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23dc0aab10ca466cb1b268ba1c456ac1',ramdisk_id='',reservation_id='r-75f25068',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-292757575',owner_user_name='tempest-ServerActionsTestJSON-292757575-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T15:10:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b1c84f20ffdd429d9965ed731c086635',uuid=9f321f76-b34e-4ad0-b6c4-285f4470baa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": 
true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.167 185478 DEBUG nova.network.os_vif_util [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converting VIF {"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.168 185478 DEBUG nova.network.os_vif_util [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.169 185478 DEBUG nova.objects.instance [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f321f76-b34e-4ad0-b6c4-285f4470baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.184 185478 DEBUG nova.virt.libvirt.driver [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] End _get_guest_xml xml=<domain type="kvm">
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <uuid>9f321f76-b34e-4ad0-b6c4-285f4470baa0</uuid>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <name>instance-00000006</name>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <memory>131072</memory>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <vcpu>1</vcpu>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <metadata>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <nova:name>tempest-ServerActionsTestJSON-server-864778593</nova:name>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <nova:creationTime>2026-01-05 15:10:43</nova:creationTime>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <nova:flavor name="m1.nano">
Jan 05 15:10:44 compute-0 nova_compute[185474]:         <nova:memory>128</nova:memory>
Jan 05 15:10:44 compute-0 nova_compute[185474]:         <nova:disk>1</nova:disk>
Jan 05 15:10:44 compute-0 nova_compute[185474]:         <nova:swap>0</nova:swap>
Jan 05 15:10:44 compute-0 nova_compute[185474]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 15:10:44 compute-0 nova_compute[185474]:         <nova:vcpus>1</nova:vcpus>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       </nova:flavor>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <nova:owner>
Jan 05 15:10:44 compute-0 nova_compute[185474]:         <nova:user uuid="b1c84f20ffdd429d9965ed731c086635">tempest-ServerActionsTestJSON-292757575-project-member</nova:user>
Jan 05 15:10:44 compute-0 nova_compute[185474]:         <nova:project uuid="23dc0aab10ca466cb1b268ba1c456ac1">tempest-ServerActionsTestJSON-292757575</nova:project>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       </nova:owner>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <nova:root type="image" uuid="e22fea2c-125b-4347-8d96-267cb6a6831b"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <nova:ports>
Jan 05 15:10:44 compute-0 nova_compute[185474]:         <nova:port uuid="5d68d02c-7204-4217-adec-1d5b6f2fc0be">
Jan 05 15:10:44 compute-0 nova_compute[185474]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:         </nova:port>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       </nova:ports>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     </nova:instance>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   </metadata>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <sysinfo type="smbios">
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <system>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <entry name="manufacturer">RDO</entry>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <entry name="product">OpenStack Compute</entry>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <entry name="serial">9f321f76-b34e-4ad0-b6c4-285f4470baa0</entry>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <entry name="uuid">9f321f76-b34e-4ad0-b6c4-285f4470baa0</entry>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <entry name="family">Virtual Machine</entry>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     </system>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   </sysinfo>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <os>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <boot dev="hd"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <smbios mode="sysinfo"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   </os>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <features>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <acpi/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <apic/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <vmcoreinfo/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   </features>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <clock offset="utc">
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <timer name="hpet" present="no"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   </clock>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <cpu mode="host-model" match="exact">
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   </cpu>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   <devices>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <disk type="file" device="disk">
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <target dev="vda" bus="virtio"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <disk type="file" device="cdrom">
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <source file="/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk.config"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <target dev="sda" bus="sata"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     </disk>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <interface type="ethernet">
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <mac address="fa:16:3e:4d:dc:0e"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <mtu size="1442"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <target dev="tap5d68d02c-72"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     </interface>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <serial type="pty">
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <log file="/var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/console.log" append="off"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     </serial>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <video>
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <model type="virtio"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     </video>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <input type="tablet" bus="usb"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <input type="keyboard" bus="usb"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <rng model="virtio">
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <backend model="random">/dev/urandom</backend>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     </rng>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <controller type="usb" index="0"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     <memballoon model="virtio">
Jan 05 15:10:44 compute-0 nova_compute[185474]:       <stats period="10"/>
Jan 05 15:10:44 compute-0 nova_compute[185474]:     </memballoon>
Jan 05 15:10:44 compute-0 nova_compute[185474]:   </devices>
Jan 05 15:10:44 compute-0 nova_compute[185474]: </domain>
Jan 05 15:10:44 compute-0 nova_compute[185474]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.185 185478 DEBUG oslo_concurrency.processutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.272 185478 DEBUG oslo_concurrency.processutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.273 185478 DEBUG oslo_concurrency.processutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.369 185478 DEBUG oslo_concurrency.processutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.370 185478 DEBUG nova.objects.instance [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9f321f76-b34e-4ad0-b6c4-285f4470baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.383 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.387 185478 DEBUG oslo_concurrency.processutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.472 185478 DEBUG oslo_concurrency.processutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/38b8ac6fc49be41905fc77dbe18ef48c096d20d7 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.473 185478 DEBUG nova.virt.disk.api [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Checking if we can resize image /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.474 185478 DEBUG oslo_concurrency.processutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.569 185478 DEBUG oslo_concurrency.processutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.570 185478 DEBUG nova.virt.disk.api [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Cannot resize image /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.570 185478 DEBUG nova.objects.instance [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f321f76-b34e-4ad0-b6c4-285f4470baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.591 185478 DEBUG nova.virt.libvirt.vif [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T15:08:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-864778593',display_name='tempest-ServerActionsTestJSON-server-864778593',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-864778593',id=6,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSqj77vlD6kVeek16cO/Hhu/zNaQXeoSK+F7dXcoh+Z9es9Ys2ZMWKCWVSXggTtqS4B5KUVwu17u1PvVEzOSYCL9wnO8by7z4oz/x0vi0Pzvt3LMGG6NC/ghGg3ZVB5ig==',key_name='tempest-keypair-763020533',keypairs=<?>,launch_index=0,launched_at=2026-01-05T15:09:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='23dc0aab10ca466cb1b268ba1c456ac1',ramdisk_id='',reservation_id='r-75f25068',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-292757575',owner_user_name='tempest-ServerActionsTestJSON-292757575-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T15:10:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b1c84f20ffdd429d9965ed731c086635',uuid=9f321f76-b34e-4ad0-b6c4-285f4470baa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.592 185478 DEBUG nova.network.os_vif_util [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converting VIF {"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.592 185478 DEBUG nova.network.os_vif_util [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.593 185478 DEBUG os_vif [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.593 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.594 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.594 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.598 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.599 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d68d02c-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.600 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d68d02c-72, col_values=(('external_ids', {'iface-id': '5d68d02c-7204-4217-adec-1d5b6f2fc0be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:dc:0e', 'vm-uuid': '9f321f76-b34e-4ad0-b6c4-285f4470baa0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.602 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:44 compute-0 NetworkManager[56139]: <info>  [1767625844.6051] manager: (tap5d68d02c-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.605 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.609 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.611 185478 INFO os_vif [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72')
Jan 05 15:10:44 compute-0 kernel: tap5d68d02c-72: entered promiscuous mode
Jan 05 15:10:44 compute-0 NetworkManager[56139]: <info>  [1767625844.7151] manager: (tap5d68d02c-72): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Jan 05 15:10:44 compute-0 systemd-udevd[251329]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 15:10:44 compute-0 ovn_controller[97763]: 2026-01-05T15:10:44Z|00118|binding|INFO|Claiming lport 5d68d02c-7204-4217-adec-1d5b6f2fc0be for this chassis.
Jan 05 15:10:44 compute-0 ovn_controller[97763]: 2026-01-05T15:10:44Z|00119|binding|INFO|5d68d02c-7204-4217-adec-1d5b6f2fc0be: Claiming fa:16:3e:4d:dc:0e 10.100.0.13
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.717 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:44 compute-0 NetworkManager[56139]: <info>  [1767625844.7345] device (tap5d68d02c-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 15:10:44 compute-0 NetworkManager[56139]: <info>  [1767625844.7412] device (tap5d68d02c-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 15:10:44 compute-0 ovn_controller[97763]: 2026-01-05T15:10:44Z|00120|binding|INFO|Setting lport 5d68d02c-7204-4217-adec-1d5b6f2fc0be ovn-installed in OVS
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.744 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:44 compute-0 nova_compute[185474]: 2026-01-05 15:10:44.746 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.746 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:dc:0e 10.100.0.13'], port_security=['fa:16:3e:4d:dc:0e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9f321f76-b34e-4ad0-b6c4-285f4470baa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7313966f-87a0-413c-b336-702cd552f4fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23dc0aab10ca466cb1b268ba1c456ac1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '347728ff-d8cb-45fb-b3a1-665f18a6be0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7084d359-9113-48e1-9593-68ec04f6720b, chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=5d68d02c-7204-4217-adec-1d5b6f2fc0be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.747 107222 INFO neutron.agent.ovn.metadata.agent [-] Port 5d68d02c-7204-4217-adec-1d5b6f2fc0be in datapath 7313966f-87a0-413c-b336-702cd552f4fe bound to our chassis
Jan 05 15:10:44 compute-0 ovn_controller[97763]: 2026-01-05T15:10:44Z|00121|binding|INFO|Setting lport 5d68d02c-7204-4217-adec-1d5b6f2fc0be up in Southbound
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.749 107222 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7313966f-87a0-413c-b336-702cd552f4fe
Jan 05 15:10:44 compute-0 systemd-machined[156786]: New machine qemu-11-instance-00000006.
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.767 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[f5799a7e-49d3-40fd-abe4-cf591cf0f4d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.768 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7313966f-81 in ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.771 239805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7313966f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.771 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[a5723ad0-d937-4fa1-aae8-a80203ec781a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.772 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[de6d07c6-55fb-4313-83c8-be1cc9307fea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000006.
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.784 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[5f90eded-05fc-4ac2-b041-1a1a8fe3ad1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.811 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb6062f-cab6-4728-be02-db7517a7bd5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.830 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.831 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.832 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.848 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[69fd5140-6e4f-47f1-8fe3-44fd150880cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 NetworkManager[56139]: <info>  [1767625844.8584] manager: (tap7313966f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.857 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[c899cb6e-b53b-4767-b1a2-dbd5c533ab8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.894 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[a527b202-385f-4c9e-a15a-37e8761a0e9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.900 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[b6df51ab-f7b0-42b7-a2b8-2bdcb772abfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 NetworkManager[56139]: <info>  [1767625844.9329] device (tap7313966f-80): carrier: link connected
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.940 239851 DEBUG oslo.privsep.daemon [-] privsep: reply[073c7917-fcc7-4cae-b976-c9a64ba52c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.960 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[52dbaefe-075b-4bd4-ab69-e10bfce24b0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7313966f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:df:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517187, 'reachable_time': 30559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251538, 'error': None, 'target': 'ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.977 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[509e2a34-55b2-4057-966a-426dd0b98196]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:df96'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517187, 'tstamp': 517187}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251539, 'error': None, 'target': 'ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:44.998 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[4268a861-340a-4b01-8291-83be0f71d05e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7313966f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:df:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517187, 'reachable_time': 30559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251540, 'error': None, 'target': 'ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:45.038 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3bb16d-e507-41ac-870d-86ae55992179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:45.095 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[a106a471-e6cb-4bf7-87ad-782a273e0607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:45.098 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7313966f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:45.098 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:45.099 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7313966f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:10:45 compute-0 kernel: tap7313966f-80: entered promiscuous mode
Jan 05 15:10:45 compute-0 NetworkManager[56139]: <info>  [1767625845.1031] manager: (tap7313966f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.102 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:45.110 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7313966f-80, col_values=(('external_ids', {'iface-id': '707d34b3-bc8b-4c2e-8e88-017cd6da92d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.112 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:45 compute-0 ovn_controller[97763]: 2026-01-05T15:10:45Z|00122|binding|INFO|Releasing lport 707d34b3-bc8b-4c2e-8e88-017cd6da92d0 from this chassis (sb_readonly=0)
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.113 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:45.113 107222 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7313966f-87a0-413c-b336-702cd552f4fe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7313966f-87a0-413c-b336-702cd552f4fe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:45.114 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[03392099-6843-41a1-864f-0899d0c22a01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:45.116 107222 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: global
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     log         /dev/log local0 debug
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     log-tag     haproxy-metadata-proxy-7313966f-87a0-413c-b336-702cd552f4fe
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     user        root
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     group       root
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     maxconn     1024
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     pidfile     /var/lib/neutron/external/pids/7313966f-87a0-413c-b336-702cd552f4fe.pid.haproxy
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     daemon
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: defaults
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     log global
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     mode http
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     option httplog
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     option dontlognull
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     option http-server-close
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     option forwardfor
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     retries                 3
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     timeout http-request    30s
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     timeout connect         30s
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     timeout client          32s
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     timeout server          32s
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     timeout http-keep-alive 30s
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: listen listener
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     bind 169.254.169.254:80
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:     http-request add-header X-OVN-Network-ID 7313966f-87a0-413c-b336-702cd552f4fe
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 15:10:45 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:45.119 107222 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe', 'env', 'PROCESS_TAG=haproxy-7313966f-87a0-413c-b336-702cd552f4fe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7313966f-87a0-413c-b336-702cd552f4fe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.133 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.278 185478 DEBUG nova.virt.libvirt.host [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Removed pending event for 9f321f76-b34e-4ad0-b6c4-285f4470baa0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.279 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625845.277733, 9f321f76-b34e-4ad0-b6c4-285f4470baa0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.279 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] VM Resumed (Lifecycle Event)
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.292 185478 DEBUG nova.compute.manager [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.305 185478 INFO nova.virt.libvirt.driver [-] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Instance rebooted successfully.
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.305 185478 DEBUG nova.compute.manager [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.404 185478 DEBUG nova.network.neutron [None req-b229f9e3-3041-4a0e-9880-14443c2b256e f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updating instance_info_cache with network_info: [{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:10:45 compute-0 podman[251576]: 2026-01-05 15:10:45.532676181 +0000 UTC m=+0.030644902 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 15:10:45 compute-0 podman[251576]: 2026-01-05 15:10:45.738698777 +0000 UTC m=+0.236667498 container create c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 05 15:10:45 compute-0 systemd[1]: Started libpod-conmon-c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523.scope.
Jan 05 15:10:45 compute-0 systemd[1]: Started libcrun container.
Jan 05 15:10:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9cfc77a2933414c35ebb9c9b2a64d824c6aa75fdc706b9e7c4f601fdc5527ab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.876 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.883 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:10:45 compute-0 podman[251576]: 2026-01-05 15:10:45.889944846 +0000 UTC m=+0.387913637 container init c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 05 15:10:45 compute-0 podman[251576]: 2026-01-05 15:10:45.907441366 +0000 UTC m=+0.405410067 container start c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.910 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.910 185478 DEBUG nova.virt.driver [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] Emitting event <LifecycleEvent: 1767625845.286883, 9f321f76-b34e-4ad0-b6c4-285f4470baa0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.911 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] VM Started (Lifecycle Event)
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.945 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.955 185478 DEBUG nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 15:10:45 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[251591]: [NOTICE]   (251595) : New worker (251597) forked
Jan 05 15:10:45 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[251591]: [NOTICE]   (251595) : Loading success.
Jan 05 15:10:45 compute-0 nova_compute[185474]: 2026-01-05 15:10:45.981 185478 INFO nova.compute.manager [None req-1b4a7ccf-b503-4686-af47-ba260d9dbe2b - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 05 15:10:46 compute-0 nova_compute[185474]: 2026-01-05 15:10:46.159 185478 DEBUG oslo_concurrency.lockutils [None req-b229f9e3-3041-4a0e-9880-14443c2b256e f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Releasing lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:10:46 compute-0 nova_compute[185474]: 2026-01-05 15:10:46.160 185478 DEBUG nova.compute.manager [None req-b229f9e3-3041-4a0e-9880-14443c2b256e f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 05 15:10:46 compute-0 nova_compute[185474]: 2026-01-05 15:10:46.168 185478 DEBUG nova.compute.manager [None req-b229f9e3-3041-4a0e-9880-14443c2b256e f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] network_info to inject: |[{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 05 15:10:46 compute-0 nova_compute[185474]: 2026-01-05 15:10:46.172 185478 DEBUG oslo_concurrency.lockutils [req-96c49ad0-9eb1-4e9a-b41a-41db8d768890 req-6ecce257-402c-4eaa-9393-acb3b24d3eaa 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:10:46 compute-0 nova_compute[185474]: 2026-01-05 15:10:46.172 185478 DEBUG nova.network.neutron [req-96c49ad0-9eb1-4e9a-b41a-41db8d768890 req-6ecce257-402c-4eaa-9393-acb3b24d3eaa 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Refreshing network info cache for port a5cac4ea-b043-4a43-9bef-a37897937741 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:10:46 compute-0 nova_compute[185474]: 2026-01-05 15:10:46.252 185478 DEBUG oslo_concurrency.lockutils [None req-742f3c6c-8d2c-4945-827e-108d991f91da b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:10:47 compute-0 nova_compute[185474]: 2026-01-05 15:10:47.617 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:47 compute-0 nova_compute[185474]: 2026-01-05 15:10:47.642 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:49 compute-0 nova_compute[185474]: 2026-01-05 15:10:49.385 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:49 compute-0 nova_compute[185474]: 2026-01-05 15:10:49.602 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:49 compute-0 nova_compute[185474]: 2026-01-05 15:10:49.939 185478 DEBUG nova.objects.instance [None req-060a1f37-15da-4289-b18e-cfdf48f74ca5 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Lazy-loading 'flavor' on Instance uuid 00943943-b19d-4862-8829-45a5cc14e988 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:10:49 compute-0 nova_compute[185474]: 2026-01-05 15:10:49.965 185478 DEBUG oslo_concurrency.lockutils [None req-060a1f37-15da-4289-b18e-cfdf48f74ca5 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquiring lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:10:52 compute-0 nova_compute[185474]: 2026-01-05 15:10:52.853 185478 DEBUG nova.network.neutron [req-96c49ad0-9eb1-4e9a-b41a-41db8d768890 req-6ecce257-402c-4eaa-9393-acb3b24d3eaa 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updated VIF entry in instance network info cache for port a5cac4ea-b043-4a43-9bef-a37897937741. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:10:52 compute-0 nova_compute[185474]: 2026-01-05 15:10:52.855 185478 DEBUG nova.network.neutron [req-96c49ad0-9eb1-4e9a-b41a-41db8d768890 req-6ecce257-402c-4eaa-9393-acb3b24d3eaa 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updating instance_info_cache with network_info: [{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:10:53 compute-0 nova_compute[185474]: 2026-01-05 15:10:53.171 185478 DEBUG oslo_concurrency.lockutils [req-96c49ad0-9eb1-4e9a-b41a-41db8d768890 req-6ecce257-402c-4eaa-9393-acb3b24d3eaa 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:10:53 compute-0 nova_compute[185474]: 2026-01-05 15:10:53.173 185478 DEBUG oslo_concurrency.lockutils [None req-060a1f37-15da-4289-b18e-cfdf48f74ca5 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Acquired lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:10:53 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:10:53.269 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:10:53 compute-0 podman[251612]: 2026-01-05 15:10:53.600719186 +0000 UTC m=+0.079456188 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 15:10:53 compute-0 podman[251611]: 2026-01-05 15:10:53.615892272 +0000 UTC m=+0.092853317 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 15:10:54 compute-0 nova_compute[185474]: 2026-01-05 15:10:54.389 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:54 compute-0 nova_compute[185474]: 2026-01-05 15:10:54.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:10:54 compute-0 nova_compute[185474]: 2026-01-05 15:10:54.604 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:55 compute-0 nova_compute[185474]: 2026-01-05 15:10:55.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:10:55 compute-0 nova_compute[185474]: 2026-01-05 15:10:55.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:10:55 compute-0 nova_compute[185474]: 2026-01-05 15:10:55.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:10:56 compute-0 podman[251652]: 2026-01-05 15:10:56.624757779 +0000 UTC m=+0.110820558 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, config_id=kepler, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, version=9.4, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Error during ComputeManager.update_available_resource: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 15:10:56 compute-0 nova_compute[185474]: [SQL: SELECT 1]
Jan 05 15:10:56 compute-0 nova_compute[185474]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 15:10:56 compute-0 nova_compute[185474]: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File 
"/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 120, in __init__\n    self.dispatch.engine_connect(self, _branch_from is not None)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/event/attr.py", line 334, in __call__\n    fn(*args, **kw)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 84, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, 
in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1806, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', ' 
 File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task     task(self, context)
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10584, in update_available_resource
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context,
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10631, in _get_compute_nodes_in_db
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task     return objects.ComputeNodeList.get_all_by_host(context, self.host,
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task     raise result
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task [SQL: SELECT 1]
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in 
_read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 120, in __init__\n    self.dispatch.engine_connect(self, _branch_from is not None)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/event/attr.py", line 334, in __call__\n    fn(*args, **kw)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 84, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, 
in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1806, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', ' 
 File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 15:10:56 compute-0 nova_compute[185474]: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task 
Jan 05 15:10:56 compute-0 rsyslogd[237079]: message too long (14444) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-pack [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 15:10:56 compute-0 rsyslogd[237079]: message too long (14508) with configured size 8096, begin of message is: 2026-01-05 15:10:56.748 185478 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 15:10:59 compute-0 nova_compute[185474]: 2026-01-05 15:10:59.392 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:59 compute-0 nova_compute[185474]: 2026-01-05 15:10:59.607 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:10:59 compute-0 podman[201880]: time="2026-01-05T15:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:10:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30974 "" "Go-http-client/1.1"
Jan 05 15:10:59 compute-0 nova_compute[185474]: 2026-01-05 15:10:59.760 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:10:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5307 "" "Go-http-client/1.1"
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.394 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.398 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Error during ComputeManager._heal_instance_info_cache: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 15:11:00 compute-0 nova_compute[185474]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 15:11:00 compute-0 nova_compute[185474]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 525, in get_by_uuid\n    db_inst = cls._db_instance_get_by_uuid(context, uuid, columns_to_join,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 517, in _db_instance_get_by_uuid\n    return db.instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1395, in instance_get_by_uuid\n    return _instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1400, in _instance_get_by_uuid\n    result = _build_instance_get(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task     task(self, context)
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9891, in _heal_instance_info_cache
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task     inst = objects.Instance.get_by_uuid(
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task     raise result
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 525, in get_by_uuid\n    db_inst = cls._db_instance_get_by_uuid(context, uuid, columns_to_join,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 517, in _db_instance_get_by_uuid\n    return db.instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1395, in instance_get_by_uuid\n    return _instance_get_by_uuid(context, uuid,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1400, in _instance_get_by_uuid\n    result = _build_instance_get(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task 
Jan 05 15:11:00 compute-0 nova_compute[185474]: 2026-01-05 15:11:00.736 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:01 compute-0 rsyslogd[237079]: message too long (8833) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 15:11:01 compute-0 rsyslogd[237079]: message too long (8897) with configured size 8096, begin of message is: 2026-01-05 15:11:00.735 185478 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 15:11:01 compute-0 openstack_network_exporter[205179]: ERROR   15:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:11:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:11:01 compute-0 openstack_network_exporter[205179]: ERROR   15:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:11:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:11:02 compute-0 nova_compute[185474]: 2026-01-05 15:11:02.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 15:11:04 compute-0 nova_compute[185474]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 15:11:04 compute-0 nova_compute[185474]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db     raise result
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db 
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.395 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:04 compute-0 rsyslogd[237079]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 15:11:04 compute-0 rsyslogd[237079]: message too long (9052) with configured size 8096, begin of message is: 2026-01-05 15:11:04.149 185478 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 15:11:04 compute-0 nova_compute[185474]: 2026-01-05 15:11:04.609 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:06 compute-0 podman[251670]: 2026-01-05 15:11:06.635784791 +0000 UTC m=+0.111310341 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.400 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Error during ComputeManager._run_pending_deletes: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 15:11:09 compute-0 nova_compute[185474]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 15:11:09 compute-0 nova_compute[185474]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1357, in get_by_filters\n    db_inst_list = cls._get_by_filters_impl(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1347, in _get_by_filters_impl\n    db_inst_list = db.instance_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1583, in instance_get_all_by_filters\n    return instance_get_all_by_filters_sort(context, filters, limit=limit,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1842, in instance_get_all_by_filters_sort\n    instances = query_prefix.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", 
line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task     task(self, context)
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11152, in _run_pending_deletes
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task     instances = objects.InstanceList.get_by_filters(
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task     raise result
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1357, in get_by_filters\n    db_inst_list = cls._get_by_filters_impl(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1347, in _get_by_filters_impl\n    db_inst_list = db.instance_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1583, in instance_get_all_by_filters\n    return instance_get_all_by_filters_sort(context, filters, limit=limit,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/utils.py", line 35, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 1842, in instance_get_all_by_filters_sort\n    instances = query_prefix.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in 
_checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task 
Jan 05 15:11:09 compute-0 nova_compute[185474]: 2026-01-05 15:11:09.612 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:09 compute-0 rsyslogd[237079]: message too long (9083) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 15:11:09 compute-0 rsyslogd[237079]: message too long (9147) with configured size 8096, begin of message is: 2026-01-05 15:11:09.502 185478 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 15:11:11 compute-0 nova_compute[185474]: 2026-01-05 15:11:11.503 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:12 compute-0 podman[251689]: 2026-01-05 15:11:12.628806202 +0000 UTC m=+0.112066872 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Jan 05 15:11:13 compute-0 nova_compute[185474]: 2026-01-05 15:11:13.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:14 compute-0 nova_compute[185474]: 2026-01-05 15:11:14.146 185478 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.
Jan 05 15:11:14 compute-0 nova_compute[185474]: 2026-01-05 15:11:14.385 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:14 compute-0 nova_compute[185474]: 2026-01-05 15:11:14.400 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:14 compute-0 podman[251709]: 2026-01-05 15:11:14.596353196 +0000 UTC m=+0.079434859 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 15:11:14 compute-0 nova_compute[185474]: 2026-01-05 15:11:14.615 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:14 compute-0 podman[251710]: 2026-01-05 15:11:14.636126571 +0000 UTC m=+0.113826260 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 15:11:14 compute-0 podman[251711]: 2026-01-05 15:11:14.684014943 +0000 UTC m=+0.163896620 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 15:11:15 compute-0 nova_compute[185474]: 2026-01-05 15:11:15.685 185478 DEBUG nova.compute.manager [req-e897a6c3-65b0-4f19-993f-916c625c4791 req-d303ebbb-8250-468e-9494-3f6e3437e575 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-vif-unplugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:11:15 compute-0 nova_compute[185474]: 2026-01-05 15:11:15.687 185478 DEBUG oslo_concurrency.lockutils [req-e897a6c3-65b0-4f19-993f-916c625c4791 req-d303ebbb-8250-468e-9494-3f6e3437e575 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:15 compute-0 nova_compute[185474]: 2026-01-05 15:11:15.688 185478 DEBUG oslo_concurrency.lockutils [req-e897a6c3-65b0-4f19-993f-916c625c4791 req-d303ebbb-8250-468e-9494-3f6e3437e575 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:15 compute-0 nova_compute[185474]: 2026-01-05 15:11:15.689 185478 DEBUG oslo_concurrency.lockutils [req-e897a6c3-65b0-4f19-993f-916c625c4791 req-d303ebbb-8250-468e-9494-3f6e3437e575 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:15 compute-0 nova_compute[185474]: 2026-01-05 15:11:15.690 185478 DEBUG nova.compute.manager [req-e897a6c3-65b0-4f19-993f-916c625c4791 req-d303ebbb-8250-468e-9494-3f6e3437e575 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] No waiting events found dispatching network-vif-unplugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:11:15 compute-0 nova_compute[185474]: 2026-01-05 15:11:15.692 185478 WARNING nova.compute.manager [req-e897a6c3-65b0-4f19-993f-916c625c4791 req-d303ebbb-8250-468e-9494-3f6e3437e575 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received unexpected event network-vif-unplugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be for instance with vm_state active and task_state None.
Jan 05 15:11:16 compute-0 nova_compute[185474]: 2026-01-05 15:11:16.011 185478 DEBUG nova.network.neutron [None req-060a1f37-15da-4289-b18e-cfdf48f74ca5 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.763 185478 DEBUG nova.network.neutron [None req-060a1f37-15da-4289-b18e-cfdf48f74ca5 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updating instance_info_cache with network_info: [{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.788 185478 DEBUG oslo_concurrency.lockutils [None req-060a1f37-15da-4289-b18e-cfdf48f74ca5 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] Releasing lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.790 185478 DEBUG nova.compute.manager [None req-060a1f37-15da-4289-b18e-cfdf48f74ca5 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.791 185478 DEBUG nova.compute.manager [None req-060a1f37-15da-4289-b18e-cfdf48f74ca5 f2d114b57ba04fe69b1c1c673fb3da52 47a5a3a457584254b36f5f2118cf6568 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] network_info to inject: |[{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.862 185478 DEBUG nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.863 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.864 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.864 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.865 185478 DEBUG nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] No waiting events found dispatching network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.866 185478 WARNING nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received unexpected event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be for instance with vm_state active and task_state None.
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.866 185478 DEBUG nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Received event network-changed-a5cac4ea-b043-4a43-9bef-a37897937741 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.867 185478 DEBUG nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Refreshing instance network info cache due to event network-changed-a5cac4ea-b043-4a43-9bef-a37897937741. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.868 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.868 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquired lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:11:17 compute-0 nova_compute[185474]: 2026-01-05 15:11:17.869 185478 DEBUG nova.network.neutron [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Refreshing network info cache for port a5cac4ea-b043-4a43-9bef-a37897937741 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.228 185478 DEBUG nova.network.neutron [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updated VIF entry in instance network info cache for port a5cac4ea-b043-4a43-9bef-a37897937741. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.229 185478 DEBUG nova.network.neutron [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updating instance_info_cache with network_info: [{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.261 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Releasing lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.262 185478 DEBUG nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.262 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.262 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.263 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.263 185478 DEBUG nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] No waiting events found dispatching network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.263 185478 WARNING nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received unexpected event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be for instance with vm_state active and task_state None.
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.263 185478 DEBUG nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.263 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.263 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.264 185478 DEBUG oslo_concurrency.lockutils [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.264 185478 DEBUG nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] No waiting events found dispatching network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.264 185478 WARNING nova.compute.manager [req-eb80782d-3f3c-47f2-9950-660bc4d964ef req-7c3ce655-59ba-4cec-b155-bf86c7a4c17b 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received unexpected event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be for instance with vm_state active and task_state None.
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.404 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:19 compute-0 ovn_controller[97763]: 2026-01-05T15:11:19Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:dc:0e 10.100.0.13
Jan 05 15:11:19 compute-0 nova_compute[185474]: 2026-01-05 15:11:19.617 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:24 compute-0 nova_compute[185474]: 2026-01-05 15:11:24.409 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:24 compute-0 nova_compute[185474]: 2026-01-05 15:11:24.439 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:24 compute-0 nova_compute[185474]: 2026-01-05 15:11:24.439 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 15:11:24 compute-0 podman[251782]: 2026-01-05 15:11:24.609852636 +0000 UTC m=+0.090101073 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 15:11:24 compute-0 nova_compute[185474]: 2026-01-05 15:11:24.620 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:24 compute-0 podman[251781]: 2026-01-05 15:11:24.630800918 +0000 UTC m=+0.108535428 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 05 15:11:27 compute-0 podman[251839]: 2026-01-05 15:11:27.746416411 +0000 UTC m=+0.153794919 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible, vcs-type=git, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, container_name=kepler, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30)
Jan 05 15:11:29 compute-0 nova_compute[185474]: 2026-01-05 15:11:29.414 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:29 compute-0 nova_compute[185474]: 2026-01-05 15:11:29.624 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:29 compute-0 podman[201880]: time="2026-01-05T15:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:11:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30974 "" "Go-http-client/1.1"
Jan 05 15:11:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5308 "" "Go-http-client/1.1"
Jan 05 15:11:31 compute-0 openstack_network_exporter[205179]: ERROR   15:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:11:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:11:31 compute-0 openstack_network_exporter[205179]: ERROR   15:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:11:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:11:34 compute-0 nova_compute[185474]: 2026-01-05 15:11:34.417 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:34 compute-0 nova_compute[185474]: 2026-01-05 15:11:34.629 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:37 compute-0 podman[251859]: 2026-01-05 15:11:37.702186211 +0000 UTC m=+0.161712251 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 05 15:11:39 compute-0 nova_compute[185474]: 2026-01-05 15:11:39.421 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:39 compute-0 nova_compute[185474]: 2026-01-05 15:11:39.632 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:43 compute-0 podman[251878]: 2026-01-05 15:11:43.632379981 +0000 UTC m=+0.114889757 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, version=9.6, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc.)
Jan 05 15:11:44 compute-0 nova_compute[185474]: 2026-01-05 15:11:44.426 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:44 compute-0 nova_compute[185474]: 2026-01-05 15:11:44.636 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:44 compute-0 podman[251899]: 2026-01-05 15:11:44.791094087 +0000 UTC m=+0.086542758 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:11:44 compute-0 podman[251898]: 2026-01-05 15:11:44.829485185 +0000 UTC m=+0.126163499 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 15:11:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:44.830 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:44.831 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:44.832 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:44 compute-0 podman[251910]: 2026-01-05 15:11:44.907919265 +0000 UTC m=+0.166645213 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 05 15:11:45 compute-0 ovn_controller[97763]: 2026-01-05T15:11:45Z|00123|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 05 15:11:49 compute-0 nova_compute[185474]: 2026-01-05 15:11:49.429 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:49 compute-0 nova_compute[185474]: 2026-01-05 15:11:49.640 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:53 compute-0 nova_compute[185474]: 2026-01-05 15:11:53.943 185478 DEBUG oslo_concurrency.lockutils [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:53 compute-0 nova_compute[185474]: 2026-01-05 15:11:53.944 185478 DEBUG oslo_concurrency.lockutils [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:53 compute-0 nova_compute[185474]: 2026-01-05 15:11:53.944 185478 DEBUG oslo_concurrency.lockutils [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:53 compute-0 nova_compute[185474]: 2026-01-05 15:11:53.945 185478 DEBUG oslo_concurrency.lockutils [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:53 compute-0 nova_compute[185474]: 2026-01-05 15:11:53.945 185478 DEBUG oslo_concurrency.lockutils [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:53 compute-0 nova_compute[185474]: 2026-01-05 15:11:53.947 185478 INFO nova.compute.manager [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Terminating instance
Jan 05 15:11:53 compute-0 nova_compute[185474]: 2026-01-05 15:11:53.948 185478 DEBUG nova.compute.manager [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 15:11:53 compute-0 kernel: tap5d68d02c-72 (unregistering): left promiscuous mode
Jan 05 15:11:53 compute-0 NetworkManager[56139]: <info>  [1767625913.9906] device (tap5d68d02c-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.005 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:54 compute-0 ovn_controller[97763]: 2026-01-05T15:11:54Z|00124|binding|INFO|Releasing lport 5d68d02c-7204-4217-adec-1d5b6f2fc0be from this chassis (sb_readonly=0)
Jan 05 15:11:54 compute-0 ovn_controller[97763]: 2026-01-05T15:11:54Z|00125|binding|INFO|Setting lport 5d68d02c-7204-4217-adec-1d5b6f2fc0be down in Southbound
Jan 05 15:11:54 compute-0 ovn_controller[97763]: 2026-01-05T15:11:54Z|00126|binding|INFO|Removing iface tap5d68d02c-72 ovn-installed in OVS
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.011 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.035 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:dc:0e 10.100.0.13'], port_security=['fa:16:3e:4d:dc:0e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9f321f76-b34e-4ad0-b6c4-285f4470baa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7313966f-87a0-413c-b336-702cd552f4fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23dc0aab10ca466cb1b268ba1c456ac1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '347728ff-d8cb-45fb-b3a1-665f18a6be0c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7084d359-9113-48e1-9593-68ec04f6720b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>], logical_port=5d68d02c-7204-4217-adec-1d5b6f2fc0be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fbb88ba7670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.039 107222 INFO neutron.agent.ovn.metadata.agent [-] Port 5d68d02c-7204-4217-adec-1d5b6f2fc0be in datapath 7313966f-87a0-413c-b336-702cd552f4fe unbound from our chassis
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.044 107222 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7313966f-87a0-413c-b336-702cd552f4fe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.046 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff83f75-9194-460b-97cd-3187a765cc93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.047 107222 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe namespace which is not needed anymore
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.050 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:54 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 05 15:11:54 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000006.scope: Consumed 44.758s CPU time.
Jan 05 15:11:54 compute-0 systemd-machined[156786]: Machine qemu-11-instance-00000006 terminated.
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.185 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.193 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:54 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[251591]: [NOTICE]   (251595) : haproxy version is 2.8.14-c23fe91
Jan 05 15:11:54 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[251591]: [NOTICE]   (251595) : path to executable is /usr/sbin/haproxy
Jan 05 15:11:54 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[251591]: [WARNING]  (251595) : Exiting Master process...
Jan 05 15:11:54 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[251591]: [WARNING]  (251595) : Exiting Master process...
Jan 05 15:11:54 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[251591]: [ALERT]    (251595) : Current worker (251597) exited with code 143 (Terminated)
Jan 05 15:11:54 compute-0 neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe[251591]: [WARNING]  (251595) : All workers exited. Exiting... (0)
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.245 185478 DEBUG nova.compute.manager [req-7e3b3968-8b8e-4fc2-903c-fdc1bcd8d1ba req-650bdac4-9629-4867-8d64-eeab6242b68c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-vif-unplugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.246 185478 DEBUG oslo_concurrency.lockutils [req-7e3b3968-8b8e-4fc2-903c-fdc1bcd8d1ba req-650bdac4-9629-4867-8d64-eeab6242b68c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.246 185478 DEBUG oslo_concurrency.lockutils [req-7e3b3968-8b8e-4fc2-903c-fdc1bcd8d1ba req-650bdac4-9629-4867-8d64-eeab6242b68c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.247 185478 DEBUG oslo_concurrency.lockutils [req-7e3b3968-8b8e-4fc2-903c-fdc1bcd8d1ba req-650bdac4-9629-4867-8d64-eeab6242b68c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.247 185478 DEBUG nova.compute.manager [req-7e3b3968-8b8e-4fc2-903c-fdc1bcd8d1ba req-650bdac4-9629-4867-8d64-eeab6242b68c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] No waiting events found dispatching network-vif-unplugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.247 185478 DEBUG nova.compute.manager [req-7e3b3968-8b8e-4fc2-903c-fdc1bcd8d1ba req-650bdac4-9629-4867-8d64-eeab6242b68c 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-vif-unplugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 15:11:54 compute-0 systemd[1]: libpod-c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523.scope: Deactivated successfully.
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.256 185478 INFO nova.virt.libvirt.driver [-] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Instance destroyed successfully.
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.256 185478 DEBUG nova.objects.instance [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lazy-loading 'resources' on Instance uuid 9f321f76-b34e-4ad0-b6c4-285f4470baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:11:54 compute-0 podman[251987]: 2026-01-05 15:11:54.256747641 +0000 UTC m=+0.085235943 container died c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.280 185478 DEBUG nova.virt.libvirt.vif [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T15:08:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-864778593',display_name='tempest-ServerActionsTestJSON-server-864778593',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-864778593',id=6,image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLSqj77vlD6kVeek16cO/Hhu/zNaQXeoSK+F7dXcoh+Z9es9Ys2ZMWKCWVSXggTtqS4B5KUVwu17u1PvVEzOSYCL9wnO8by7z4oz/x0vi0Pzvt3LMGG6NC/ghGg3ZVB5ig==',key_name='tempest-keypair-763020533',keypairs=<?>,launch_index=0,launched_at=2026-01-05T15:09:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='23dc0aab10ca466cb1b268ba1c456ac1',ramdisk_id='',reservation_id='r-75f25068',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e22fea2c-125b-4347-8d96-267cb6a6831b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-292757575',owner_user_name='tempest-ServerActionsTestJSON-292757575-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T15:10:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b1c84f20ffdd429d9965ed731c086635',uuid=9f321f76-b34e-4ad0-b6c4-285f4470baa0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.281 185478 DEBUG nova.network.os_vif_util [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converting VIF {"id": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "address": "fa:16:3e:4d:dc:0e", "network": {"id": "7313966f-87a0-413c-b336-702cd552f4fe", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1288657617-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "23dc0aab10ca466cb1b268ba1c456ac1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d68d02c-72", "ovs_interfaceid": "5d68d02c-7204-4217-adec-1d5b6f2fc0be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.282 185478 DEBUG nova.network.os_vif_util [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.282 185478 DEBUG os_vif [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.284 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.284 185478 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d68d02c-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.292 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.295 185478 INFO os_vif [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:dc:0e,bridge_name='br-int',has_traffic_filtering=True,id=5d68d02c-7204-4217-adec-1d5b6f2fc0be,network=Network(7313966f-87a0-413c-b336-702cd552f4fe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d68d02c-72')
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.296 185478 INFO nova.virt.libvirt.driver [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Deleting instance files /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0_del
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.297 185478 INFO nova.virt.libvirt.driver [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Deletion of /var/lib/nova/instances/9f321f76-b34e-4ad0-b6c4-285f4470baa0_del complete
Jan 05 15:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523-userdata-shm.mount: Deactivated successfully.
Jan 05 15:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9cfc77a2933414c35ebb9c9b2a64d824c6aa75fdc706b9e7c4f601fdc5527ab-merged.mount: Deactivated successfully.
Jan 05 15:11:54 compute-0 podman[251987]: 2026-01-05 15:11:54.317906459 +0000 UTC m=+0.146394751 container cleanup c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 15:11:54 compute-0 systemd[1]: libpod-conmon-c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523.scope: Deactivated successfully.
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.359 185478 INFO nova.compute.manager [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Took 0.41 seconds to destroy the instance on the hypervisor.
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.359 185478 DEBUG oslo.service.loopingcall [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.360 185478 DEBUG nova.compute.manager [-] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.360 185478 DEBUG nova.network.neutron [-] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 15:11:54 compute-0 podman[252028]: 2026-01-05 15:11:54.40946131 +0000 UTC m=+0.064177719 container remove c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.417 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[c05b03f8-e856-442d-963f-7404e361ff4d]: (4, ('Mon Jan  5 03:11:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe (c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523)\nc42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523\nMon Jan  5 03:11:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe (c42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523)\nc42f485ff83092d0ae75b131ea7dfe12b80a0f5e54df20248b14b3413ce5b523\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.419 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[cf50f896-3b1c-41e8-8e78-9017ab1e879f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.420 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7313966f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.422 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:54 compute-0 kernel: tap7313966f-80: left promiscuous mode
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.427 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.431 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[869c9144-170d-4205-9141-89ed56d055f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:11:54 compute-0 nova_compute[185474]: 2026-01-05 15:11:54.440 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.444 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[73bb10a0-37ca-4899-8168-687e75547ff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.446 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[a307f27a-ba5b-4633-84a6-5a318b1fd476]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.466 239805 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf740e5-a572-4e5a-9122-910f1b843754]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517178, 'reachable_time': 38279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252045, 'error': None, 'target': 'ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.471 107613 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7313966f-87a0-413c-b336-702cd552f4fe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 15:11:54 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:11:54.471 107613 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1a2f85-a68c-4fea-8b83-f3e7cee792c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 15:11:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d7313966f\x2d87a0\x2d413c\x2db336\x2d702cd552f4fe.mount: Deactivated successfully.
Jan 05 15:11:55 compute-0 nova_compute[185474]: 2026-01-05 15:11:55.420 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:55 compute-0 nova_compute[185474]: 2026-01-05 15:11:55.421 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:11:55 compute-0 podman[252047]: 2026-01-05 15:11:55.643138803 +0000 UTC m=+0.120383795 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 15:11:55 compute-0 podman[252046]: 2026-01-05 15:11:55.667142896 +0000 UTC m=+0.150400359 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Jan 05 15:11:56 compute-0 nova_compute[185474]: 2026-01-05 15:11:56.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.378 185478 DEBUG nova.compute.manager [req-b93c3f03-4040-4155-b673-548a5a107b79 req-fbc47d1a-0699-4ba5-90a5-4652acb33143 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.378 185478 DEBUG oslo_concurrency.lockutils [req-b93c3f03-4040-4155-b673-548a5a107b79 req-fbc47d1a-0699-4ba5-90a5-4652acb33143 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Acquiring lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.379 185478 DEBUG oslo_concurrency.lockutils [req-b93c3f03-4040-4155-b673-548a5a107b79 req-fbc47d1a-0699-4ba5-90a5-4652acb33143 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.380 185478 DEBUG oslo_concurrency.lockutils [req-b93c3f03-4040-4155-b673-548a5a107b79 req-fbc47d1a-0699-4ba5-90a5-4652acb33143 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.381 185478 DEBUG nova.compute.manager [req-b93c3f03-4040-4155-b673-548a5a107b79 req-fbc47d1a-0699-4ba5-90a5-4652acb33143 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] No waiting events found dispatching network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.381 185478 WARNING nova.compute.manager [req-b93c3f03-4040-4155-b673-548a5a107b79 req-fbc47d1a-0699-4ba5-90a5-4652acb33143 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received unexpected event network-vif-plugged-5d68d02c-7204-4217-adec-1d5b6f2fc0be for instance with vm_state active and task_state deleting.
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.426 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.427 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.428 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.429 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.560 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.671 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.673 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.782 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.794 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.892 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.894 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:11:57 compute-0 nova_compute[185474]: 2026-01-05 15:11:57.978 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.572 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.576 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4993MB free_disk=72.3216667175293GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.577 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.578 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:58 compute-0 podman[252103]: 2026-01-05 15:11:58.624338457 +0000 UTC m=+0.102203467 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, io.openshift.expose-services=)
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.784 185478 DEBUG nova.network.neutron [-] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.803 185478 INFO nova.compute.manager [-] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Took 4.44 seconds to deallocate network for instance.
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.824 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 9f321f76-b34e-4ad0-b6c4-285f4470baa0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.825 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 00943943-b19d-4862-8829-45a5cc14e988 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.825 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.825 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.825 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.842 185478 DEBUG oslo_concurrency.lockutils [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.906 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing inventories for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.995 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating ProviderTree inventory for provider 81b80649-e249-4f86-9377-abfcf7fc43dd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 15:11:58 compute-0 nova_compute[185474]: 2026-01-05 15:11:58.996 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Updating inventory in ProviderTree for provider 81b80649-e249-4f86-9377-abfcf7fc43dd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.020 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing aggregate associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.046 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Refreshing trait associations for resource provider 81b80649-e249-4f86-9377-abfcf7fc43dd, traits: HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_RESCUE_BFV,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_F16C,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NODE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.128 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.145 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.183 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.184 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.185 185478 DEBUG oslo_concurrency.lockutils [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.276 185478 DEBUG nova.compute.provider_tree [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.289 185478 DEBUG nova.scheduler.client.report [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.293 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.315 185478 DEBUG oslo_concurrency.lockutils [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.343 185478 INFO nova.scheduler.client.report [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Deleted allocations for instance 9f321f76-b34e-4ad0-b6c4-285f4470baa0
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.444 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.600 185478 DEBUG nova.compute.manager [req-acede03b-57d4-484a-b4e7-15dbb9e157f9 req-7f72655d-4638-4757-a4bb-42325d1600c6 52335c09be794619a39811a7d2ef382c 17aa6d7188c842f19e6ac116a727a876 - - default default] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Received event network-vif-deleted-5d68d02c-7204-4217-adec-1d5b6f2fc0be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 15:11:59 compute-0 nova_compute[185474]: 2026-01-05 15:11:59.606 185478 DEBUG oslo_concurrency.lockutils [None req-c246b655-0994-4f73-8591-2e570bd63ad0 b1c84f20ffdd429d9965ed731c086635 23dc0aab10ca466cb1b268ba1c456ac1 - - default default] Lock "9f321f76-b34e-4ad0-b6c4-285f4470baa0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:11:59 compute-0 podman[201880]: time="2026-01-05T15:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:11:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 05 15:11:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4847 "" "Go-http-client/1.1"
Jan 05 15:12:01 compute-0 nova_compute[185474]: 2026-01-05 15:12:01.187 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:12:01 compute-0 nova_compute[185474]: 2026-01-05 15:12:01.187 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:12:01 compute-0 anacron[30828]: Job `cron.monthly' started
Jan 05 15:12:01 compute-0 anacron[30828]: Job `cron.monthly' terminated
Jan 05 15:12:01 compute-0 anacron[30828]: Normal exit (3 jobs run)
Jan 05 15:12:01 compute-0 nova_compute[185474]: 2026-01-05 15:12:01.395 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:12:01 compute-0 openstack_network_exporter[205179]: ERROR   15:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:12:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:12:01 compute-0 openstack_network_exporter[205179]: ERROR   15:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:12:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:12:02 compute-0 nova_compute[185474]: 2026-01-05 15:12:02.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:12:02 compute-0 nova_compute[185474]: 2026-01-05 15:12:02.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:12:02 compute-0 nova_compute[185474]: 2026-01-05 15:12:02.611 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:12:02 compute-0 nova_compute[185474]: 2026-01-05 15:12:02.612 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:12:02 compute-0 nova_compute[185474]: 2026-01-05 15:12:02.613 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 15:12:02 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:12:02.753 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:12:02 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:12:02.756 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 15:12:02 compute-0 nova_compute[185474]: 2026-01-05 15:12:02.756 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:04 compute-0 nova_compute[185474]: 2026-01-05 15:12:04.296 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:04 compute-0 nova_compute[185474]: 2026-01-05 15:12:04.447 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updating instance_info_cache with network_info: [{"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:12:04 compute-0 nova_compute[185474]: 2026-01-05 15:12:04.450 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:04 compute-0 nova_compute[185474]: 2026-01-05 15:12:04.505 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:12:04 compute-0 nova_compute[185474]: 2026-01-05 15:12:04.506 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 15:12:04 compute-0 nova_compute[185474]: 2026-01-05 15:12:04.508 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:12:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:12:04.759 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:12:05 compute-0 ovn_controller[97763]: 2026-01-05T15:12:05Z|00127|binding|INFO|Releasing lport 02807d47-c59f-4c92-8290-7fec7d1bc7e4 from this chassis (sb_readonly=0)
Jan 05 15:12:05 compute-0 ovn_controller[97763]: 2026-01-05T15:12:05Z|00128|binding|INFO|Releasing lport 4cc48a5f-b4b4-4326-a167-b706318b3e05 from this chassis (sb_readonly=0)
Jan 05 15:12:05 compute-0 nova_compute[185474]: 2026-01-05 15:12:05.732 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:08 compute-0 podman[252125]: 2026-01-05 15:12:08.673049889 +0000 UTC m=+0.153210994 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 15:12:09 compute-0 nova_compute[185474]: 2026-01-05 15:12:09.250 185478 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767625914.2465487, 9f321f76-b34e-4ad0-b6c4-285f4470baa0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 15:12:09 compute-0 nova_compute[185474]: 2026-01-05 15:12:09.252 185478 INFO nova.compute.manager [-] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] VM Stopped (Lifecycle Event)
Jan 05 15:12:09 compute-0 nova_compute[185474]: 2026-01-05 15:12:09.277 185478 DEBUG nova.compute.manager [None req-9543aa0c-4598-45f5-b981-442fae5f85fb - - - - - -] [instance: 9f321f76-b34e-4ad0-b6c4-285f4470baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 15:12:09 compute-0 nova_compute[185474]: 2026-01-05 15:12:09.303 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:09 compute-0 nova_compute[185474]: 2026-01-05 15:12:09.449 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:13 compute-0 nova_compute[185474]: 2026-01-05 15:12:13.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:12:13 compute-0 nova_compute[185474]: 2026-01-05 15:12:13.432 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:12:14 compute-0 nova_compute[185474]: 2026-01-05 15:12:14.307 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:14 compute-0 nova_compute[185474]: 2026-01-05 15:12:14.452 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:14 compute-0 podman[252145]: 2026-01-05 15:12:14.627349079 +0000 UTC m=+0.113577752 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Jan 05 15:12:15 compute-0 podman[252166]: 2026-01-05 15:12:15.582561219 +0000 UTC m=+0.076547697 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 15:12:15 compute-0 podman[252167]: 2026-01-05 15:12:15.599373477 +0000 UTC m=+0.081920077 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 15:12:15 compute-0 podman[252168]: 2026-01-05 15:12:15.644592635 +0000 UTC m=+0.123909810 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 15:12:19 compute-0 nova_compute[185474]: 2026-01-05 15:12:19.313 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:19 compute-0 nova_compute[185474]: 2026-01-05 15:12:19.457 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:24 compute-0 nova_compute[185474]: 2026-01-05 15:12:24.318 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:24 compute-0 nova_compute[185474]: 2026-01-05 15:12:24.460 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:26 compute-0 podman[252230]: 2026-01-05 15:12:26.605647481 +0000 UTC m=+0.087492152 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 15:12:26 compute-0 podman[252231]: 2026-01-05 15:12:26.628367134 +0000 UTC m=+0.097346479 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 15:12:29 compute-0 nova_compute[185474]: 2026-01-05 15:12:29.320 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:29 compute-0 nova_compute[185474]: 2026-01-05 15:12:29.463 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:29 compute-0 podman[252272]: 2026-01-05 15:12:29.601848593 +0000 UTC m=+0.089346369 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, version=9.4, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Jan 05 15:12:29 compute-0 podman[201880]: time="2026-01-05T15:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:12:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 05 15:12:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4851 "" "Go-http-client/1.1"
Jan 05 15:12:31 compute-0 openstack_network_exporter[205179]: ERROR   15:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:12:31 compute-0 openstack_network_exporter[205179]: ERROR   15:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:12:34 compute-0 nova_compute[185474]: 2026-01-05 15:12:34.323 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:34 compute-0 nova_compute[185474]: 2026-01-05 15:12:34.465 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.759 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.760 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.765 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.766 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.767 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.768 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.768 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.768 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb7df2d50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.772 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8', 'name': 'tempest-TestNetworkBasicOps-server-141186871', 'flavor': {'id': '3a2fb381-0342-40f9-8eb5-089f8c9475fd', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '134d57b916be4f4ca80b3a59630701e5', 'user_id': '8d883f36e32b4c71b56683d7117547d8', 'hostId': 'dd91e800a8ccaf570defe3489ea6eac358fb3fd9b78a6f5299436f84', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.778 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '00943943-b19d-4862-8829-45a5cc14e988', 'name': 'tempest-AttachInterfacesUnderV243Test-server-2119923937', 'flavor': {'id': '3a2fb381-0342-40f9-8eb5-089f8c9475fd', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '47a5a3a457584254b36f5f2118cf6568', 'user_id': 'f2d114b57ba04fe69b1c1c673fb3da52', 'hostId': 'e1b5aea2779c08b8229a0ef33c93fbf2dcc56b160d07dca2bcd12122', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.779 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.779 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.779 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.780 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.781 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T15:12:37.780123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.854 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.latency volume: 2126627005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.855 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.933 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.latency volume: 4134292620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.934 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.935 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.935 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.936 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.936 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.latency volume: 647796318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.937 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.latency volume: 52531640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.937 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.latency volume: 548886735 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.938 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.latency volume: 56692568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.939 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.939 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.940 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.941 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.requests volume: 1114 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T15:12:37.936445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T15:12:37.940752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.941 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.942 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.requests volume: 1104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.942 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.944 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.944 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.944 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.944 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.945 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T15:12:37.944775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.971 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.972 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:37.999 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.000 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.001 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.002 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.002 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T15:12:38.002605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.009 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.019 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.020 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.021 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.021 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.021 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.022 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.bytes volume: 73093120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.022 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.023 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.bytes volume: 73117696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.023 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.025 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.025 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.025 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.026 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.026 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T15:12:38.021778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.026 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.027 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.029 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.029 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T15:12:38.026531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T15:12:38.030151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.030 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.031 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.031 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.032 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.033 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.035 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.036 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.036 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.bytes volume: 30820864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.036 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.036 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.bytes volume: 30521856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.037 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.038 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T15:12:38.035973) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T15:12:38.038551) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.072 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/cpu volume: 34160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.113 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/cpu volume: 35620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.116 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.117 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.118 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T15:12:38.117784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.119 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.121 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.123 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.123 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.125 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T15:12:38.122943) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.127 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.127 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.128 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T15:12:38.129951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.132 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.133 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.133 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.133 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T15:12:38.133352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.133 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.packets volume: 130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.134 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.135 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.135 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.136 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.136 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.136 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.packets volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.137 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T15:12:38.136582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.139 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.139 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.139 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.140 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.140 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.141 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T15:12:38.139820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.141 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.141 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.141 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.141 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.141 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.142 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.142 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T15:12:38.141725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.142 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.143 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.143 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.143 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.143 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.143 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.bytes volume: 18782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T15:12:38.143623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.144 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.144 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.144 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.144 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.145 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.145 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.145 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.145 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.145 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.146 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.146 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.146 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.146 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.147 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.148 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.148 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T15:12:38.145230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.148 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.148 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T15:12:38.146874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.148 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/memory.usage volume: 46.5625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T15:12:38.148604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.149 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/memory.usage volume: 42.89453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.149 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.150 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.150 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.150 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.150 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.151 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.152 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.152 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.152 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.152 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.bytes volume: 23129 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.152 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.bytes volume: 4475 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.153 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.153 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.154 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.154 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.requests volume: 300 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T15:12:38.150281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.154 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T15:12:38.152445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.154 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.requests volume: 330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.155 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T15:12:38.154040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.155 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.156 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.157 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.157 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.157 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.157 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.158 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.158 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.158 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.158 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.158 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.158 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.158 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.159 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.160 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.161 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:12:38.161 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:12:39 compute-0 nova_compute[185474]: 2026-01-05 15:12:39.325 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:39 compute-0 nova_compute[185474]: 2026-01-05 15:12:39.469 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:39 compute-0 podman[252293]: 2026-01-05 15:12:39.606066207 +0000 UTC m=+0.090660513 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, org.label-schema.build-date=20251224)
Jan 05 15:12:44 compute-0 nova_compute[185474]: 2026-01-05 15:12:44.330 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:44 compute-0 nova_compute[185474]: 2026-01-05 15:12:44.471 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:44 compute-0 podman[252313]: 2026-01-05 15:12:44.776319048 +0000 UTC m=+0.077451100 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41)
Jan 05 15:12:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:12:44.832 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:12:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:12:44.833 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:12:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:12:44.834 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:12:46 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 15:12:46 compute-0 podman[252334]: 2026-01-05 15:12:46.153833316 +0000 UTC m=+0.100520932 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 15:12:46 compute-0 podman[252335]: 2026-01-05 15:12:46.174240418 +0000 UTC m=+0.127727591 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 15:12:46 compute-0 podman[252336]: 2026-01-05 15:12:46.214392244 +0000 UTC m=+0.158110632 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 15:12:49 compute-0 nova_compute[185474]: 2026-01-05 15:12:49.335 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:49 compute-0 nova_compute[185474]: 2026-01-05 15:12:49.474 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:54 compute-0 nova_compute[185474]: 2026-01-05 15:12:54.339 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:54 compute-0 nova_compute[185474]: 2026-01-05 15:12:54.477 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:56 compute-0 nova_compute[185474]: 2026-01-05 15:12:56.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:12:56 compute-0 nova_compute[185474]: 2026-01-05 15:12:56.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.427 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.428 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.428 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.429 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.508 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.581 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.582 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:12:57 compute-0 podman[252402]: 2026-01-05 15:12:57.59724475 +0000 UTC m=+0.079555835 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 15:12:57 compute-0 podman[252401]: 2026-01-05 15:12:57.61373541 +0000 UTC m=+0.087948064 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.661 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.675 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.744 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.745 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:12:57 compute-0 nova_compute[185474]: 2026-01-05 15:12:57.806 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.202 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.203 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4999MB free_disk=72.32166290283203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.203 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.203 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.273 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 00943943-b19d-4862-8829-45a5cc14e988 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.273 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.273 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.274 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.338 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.352 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.373 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:12:58 compute-0 nova_compute[185474]: 2026-01-05 15:12:58.374 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:12:59 compute-0 nova_compute[185474]: 2026-01-05 15:12:59.342 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:59 compute-0 nova_compute[185474]: 2026-01-05 15:12:59.373 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:12:59 compute-0 nova_compute[185474]: 2026-01-05 15:12:59.480 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:12:59 compute-0 podman[201880]: time="2026-01-05T15:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:12:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 05 15:12:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4853 "" "Go-http-client/1.1"
Jan 05 15:13:00 compute-0 podman[252454]: 2026-01-05 15:13:00.62589661 +0000 UTC m=+0.114325162 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, managed_by=edpm_ansible, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, release-0.7.12=, architecture=x86_64)
Jan 05 15:13:01 compute-0 nova_compute[185474]: 2026-01-05 15:13:01.396 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:13:01 compute-0 nova_compute[185474]: 2026-01-05 15:13:01.397 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:13:01 compute-0 openstack_network_exporter[205179]: ERROR   15:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:13:01 compute-0 openstack_network_exporter[205179]: ERROR   15:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:13:02 compute-0 nova_compute[185474]: 2026-01-05 15:13:02.401 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:13:02 compute-0 nova_compute[185474]: 2026-01-05 15:13:02.402 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:13:02 compute-0 nova_compute[185474]: 2026-01-05 15:13:02.402 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 15:13:03 compute-0 nova_compute[185474]: 2026-01-05 15:13:03.269 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:13:03 compute-0 nova_compute[185474]: 2026-01-05 15:13:03.270 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:13:03 compute-0 nova_compute[185474]: 2026-01-05 15:13:03.271 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 15:13:03 compute-0 nova_compute[185474]: 2026-01-05 15:13:03.271 185478 DEBUG nova.objects.instance [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 00943943-b19d-4862-8829-45a5cc14e988 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 15:13:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:13:04.254 107222 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:75:b2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '8a:45:25:6a:82:bc'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 15:13:04 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:13:04.257 107222 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 15:13:04 compute-0 nova_compute[185474]: 2026-01-05 15:13:04.257 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:04 compute-0 nova_compute[185474]: 2026-01-05 15:13:04.346 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:04 compute-0 nova_compute[185474]: 2026-01-05 15:13:04.483 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:04 compute-0 nova_compute[185474]: 2026-01-05 15:13:04.543 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updating instance_info_cache with network_info: [{"id": "a5cac4ea-b043-4a43-9bef-a37897937741", "address": "fa:16:3e:cb:a0:eb", "network": {"id": "581293f8-9c7d-4afe-8455-8275f58d2374", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1370621257-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47a5a3a457584254b36f5f2118cf6568", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5cac4ea-b0", "ovs_interfaceid": "a5cac4ea-b043-4a43-9bef-a37897937741", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:13:04 compute-0 nova_compute[185474]: 2026-01-05 15:13:04.561 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-00943943-b19d-4862-8829-45a5cc14e988" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:13:04 compute-0 nova_compute[185474]: 2026-01-05 15:13:04.562 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: 00943943-b19d-4862-8829-45a5cc14e988] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 15:13:04 compute-0 nova_compute[185474]: 2026-01-05 15:13:04.563 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:13:05 compute-0 nova_compute[185474]: 2026-01-05 15:13:05.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:13:09 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:13:09.264 107222 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=82a66401-c715-4a23-aa01-55f1bbd6f669, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 15:13:09 compute-0 nova_compute[185474]: 2026-01-05 15:13:09.349 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:09 compute-0 nova_compute[185474]: 2026-01-05 15:13:09.486 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:10 compute-0 podman[252473]: 2026-01-05 15:13:10.60260301 +0000 UTC m=+0.093041156 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 05 15:13:13 compute-0 nova_compute[185474]: 2026-01-05 15:13:13.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:13:14 compute-0 nova_compute[185474]: 2026-01-05 15:13:14.354 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:14 compute-0 nova_compute[185474]: 2026-01-05 15:13:14.489 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:15 compute-0 podman[252493]: 2026-01-05 15:13:15.656870974 +0000 UTC m=+0.138980263 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, distribution-scope=public)
Jan 05 15:13:16 compute-0 podman[252515]: 2026-01-05 15:13:16.633364658 +0000 UTC m=+0.104899906 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 05 15:13:16 compute-0 podman[252514]: 2026-01-05 15:13:16.63808608 +0000 UTC m=+0.110781168 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 15:13:16 compute-0 podman[252516]: 2026-01-05 15:13:16.703169347 +0000 UTC m=+0.168060191 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 15:13:19 compute-0 nova_compute[185474]: 2026-01-05 15:13:19.360 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:19 compute-0 nova_compute[185474]: 2026-01-05 15:13:19.493 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:24 compute-0 nova_compute[185474]: 2026-01-05 15:13:24.364 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:24 compute-0 nova_compute[185474]: 2026-01-05 15:13:24.497 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:28 compute-0 podman[252581]: 2026-01-05 15:13:28.623615068 +0000 UTC m=+0.103739485 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 15:13:28 compute-0 podman[252582]: 2026-01-05 15:13:28.658645211 +0000 UTC m=+0.122629627 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 15:13:28 compute-0 sshd-session[252579]: Invalid user solana from 165.22.168.95 port 58224
Jan 05 15:13:28 compute-0 sshd-session[252579]: Connection closed by invalid user solana 165.22.168.95 port 58224 [preauth]
Jan 05 15:13:29 compute-0 nova_compute[185474]: 2026-01-05 15:13:29.367 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:29 compute-0 nova_compute[185474]: 2026-01-05 15:13:29.501 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:29 compute-0 podman[201880]: time="2026-01-05T15:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:13:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 05 15:13:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4846 "" "Go-http-client/1.1"
Jan 05 15:13:31 compute-0 openstack_network_exporter[205179]: ERROR   15:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:13:31 compute-0 openstack_network_exporter[205179]: ERROR   15:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:13:31 compute-0 podman[252623]: 2026-01-05 15:13:31.659450054 +0000 UTC m=+0.132181847 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, architecture=x86_64, release=1214.1726694543, vcs-type=git, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, config_id=kepler, io.openshift.expose-services=, release-0.7.12=)
Jan 05 15:13:34 compute-0 nova_compute[185474]: 2026-01-05 15:13:34.372 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:34 compute-0 nova_compute[185474]: 2026-01-05 15:13:34.503 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:38 compute-0 ovn_controller[97763]: 2026-01-05T15:13:38Z|00129|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 05 15:13:39 compute-0 nova_compute[185474]: 2026-01-05 15:13:39.378 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:39 compute-0 nova_compute[185474]: 2026-01-05 15:13:39.506 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:41 compute-0 podman[252643]: 2026-01-05 15:13:41.621462915 +0000 UTC m=+0.109906317 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 05 15:13:44 compute-0 nova_compute[185474]: 2026-01-05 15:13:44.382 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:44 compute-0 nova_compute[185474]: 2026-01-05 15:13:44.509 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:13:44.833 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:13:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:13:44.834 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:13:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:13:44.835 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:13:46 compute-0 podman[252663]: 2026-01-05 15:13:46.627717696 +0000 UTC m=+0.109338812 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Jan 05 15:13:47 compute-0 podman[252686]: 2026-01-05 15:13:47.599142457 +0000 UTC m=+0.078025405 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 15:13:47 compute-0 podman[252685]: 2026-01-05 15:13:47.612672278 +0000 UTC m=+0.088178938 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 15:13:47 compute-0 podman[252687]: 2026-01-05 15:13:47.688876166 +0000 UTC m=+0.152931158 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 15:13:49 compute-0 nova_compute[185474]: 2026-01-05 15:13:49.387 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:49 compute-0 nova_compute[185474]: 2026-01-05 15:13:49.511 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:54 compute-0 nova_compute[185474]: 2026-01-05 15:13:54.391 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:54 compute-0 nova_compute[185474]: 2026-01-05 15:13:54.513 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.399 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.400 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.437 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.438 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.440 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.440 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.537 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.644 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.645 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.738 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.747 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.817 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.818 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 15:13:57 compute-0 nova_compute[185474]: 2026-01-05 15:13:57.892 185478 DEBUG oslo_concurrency.processutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/00943943-b19d-4862-8829-45a5cc14e988/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.285 185478 WARNING nova.virt.libvirt.driver [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.287 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4977MB free_disk=72.32166290283203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.287 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.287 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.363 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance 00943943-b19d-4862-8829-45a5cc14e988 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.363 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Instance e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.363 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.364 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.420 185478 DEBUG nova.compute.provider_tree [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed in ProviderTree for provider: 81b80649-e249-4f86-9377-abfcf7fc43dd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.437 185478 DEBUG nova.scheduler.client.report [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Inventory has not changed for provider 81b80649-e249-4f86-9377-abfcf7fc43dd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.439 185478 DEBUG nova.compute.resource_tracker [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 15:13:58 compute-0 nova_compute[185474]: 2026-01-05 15:13:58.439 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 15:13:59 compute-0 nova_compute[185474]: 2026-01-05 15:13:59.396 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:59 compute-0 nova_compute[185474]: 2026-01-05 15:13:59.516 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:13:59 compute-0 podman[252764]: 2026-01-05 15:13:59.60046717 +0000 UTC m=+0.083226471 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 15:13:59 compute-0 podman[252763]: 2026-01-05 15:13:59.6369213 +0000 UTC m=+0.127646859 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 15:13:59 compute-0 podman[201880]: time="2026-01-05T15:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:13:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 05 15:13:59 compute-0 podman[201880]: @ - - [05/Jan/2026:15:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4852 "" "Go-http-client/1.1"
Jan 05 15:14:01 compute-0 openstack_network_exporter[205179]: ERROR   15:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:14:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:14:01 compute-0 openstack_network_exporter[205179]: ERROR   15:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:14:01 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:14:01 compute-0 nova_compute[185474]: 2026-01-05 15:14:01.441 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:14:01 compute-0 nova_compute[185474]: 2026-01-05 15:14:01.441 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:14:02 compute-0 nova_compute[185474]: 2026-01-05 15:14:02.399 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:14:02 compute-0 podman[252806]: 2026-01-05 15:14:02.640117835 +0000 UTC m=+0.123981993 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release-0.7.12=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543)
Jan 05 15:14:03 compute-0 nova_compute[185474]: 2026-01-05 15:14:03.394 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:14:04 compute-0 nova_compute[185474]: 2026-01-05 15:14:04.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:14:04 compute-0 nova_compute[185474]: 2026-01-05 15:14:04.400 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 15:14:04 compute-0 nova_compute[185474]: 2026-01-05 15:14:04.402 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:04 compute-0 nova_compute[185474]: 2026-01-05 15:14:04.519 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:05 compute-0 nova_compute[185474]: 2026-01-05 15:14:05.284 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquiring lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 15:14:05 compute-0 nova_compute[185474]: 2026-01-05 15:14:05.285 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Acquired lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 15:14:05 compute-0 nova_compute[185474]: 2026-01-05 15:14:05.285 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 15:14:07 compute-0 nova_compute[185474]: 2026-01-05 15:14:07.310 185478 DEBUG nova.network.neutron [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updating instance_info_cache with network_info: [{"id": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "address": "fa:16:3e:d8:1f:9a", "network": {"id": "a4d9427d-0bca-46c0-aaca-aa38c0dca8a5", "bridge": "br-int", "label": "tempest-network-smoke--1910768748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134d57b916be4f4ca80b3a59630701e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39d7dd25-00", "ovs_interfaceid": "39d7dd25-004e-46d1-b35c-19e1d39b90b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 15:14:07 compute-0 nova_compute[185474]: 2026-01-05 15:14:07.342 185478 DEBUG oslo_concurrency.lockutils [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Releasing lock "refresh_cache-e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 15:14:07 compute-0 nova_compute[185474]: 2026-01-05 15:14:07.343 185478 DEBUG nova.compute.manager [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] [instance: e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 15:14:07 compute-0 nova_compute[185474]: 2026-01-05 15:14:07.345 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:14:09 compute-0 nova_compute[185474]: 2026-01-05 15:14:09.407 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:09 compute-0 nova_compute[185474]: 2026-01-05 15:14:09.523 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:12 compute-0 podman[252825]: 2026-01-05 15:14:12.59377214 +0000 UTC m=+0.078166348 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 15:14:14 compute-0 nova_compute[185474]: 2026-01-05 15:14:14.412 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:14 compute-0 nova_compute[185474]: 2026-01-05 15:14:14.526 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:15 compute-0 nova_compute[185474]: 2026-01-05 15:14:15.398 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:14:15 compute-0 nova_compute[185474]: 2026-01-05 15:14:15.547 185478 DEBUG oslo_service.periodic_task [None req-8b3962d0-474b-4abf-af73-43901ad7359f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 15:14:17 compute-0 podman[252843]: 2026-01-05 15:14:17.610462093 +0000 UTC m=+0.092489142 container health_status 41113f0d848459e1957429133d41363b15545598b15628a721bcd11e2965361f (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 15:14:17 compute-0 podman[252864]: 2026-01-05 15:14:17.734916097 +0000 UTC m=+0.066734671 container health_status c18db406f22497b1c066fcefc8ef9388ebb45521c81bc1e52b7ca857ee2f9827 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 05 15:14:17 compute-0 podman[252863]: 2026-01-05 15:14:17.778349238 +0000 UTC m=+0.108065047 container health_status 07cb82cbc1224de4283397ed41aefaa2af192a19f939317512293f7f24de921b (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 15:14:17 compute-0 podman[252898]: 2026-01-05 15:14:17.911873879 +0000 UTC m=+0.139355694 container health_status eebf71f2d2e4bfe872f36eb3715cae1f0ec3ae4db702bf3f7ea3ed9b31a3b76c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 15:14:19 compute-0 nova_compute[185474]: 2026-01-05 15:14:19.417 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:19 compute-0 nova_compute[185474]: 2026-01-05 15:14:19.529 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:24 compute-0 nova_compute[185474]: 2026-01-05 15:14:24.423 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:24 compute-0 nova_compute[185474]: 2026-01-05 15:14:24.530 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:27 compute-0 sshd-session[252932]: Accepted publickey for zuul from 192.168.122.10 port 47046 ssh2: ECDSA SHA256:Src0gfOaAHKzPWxuiFDAsbGjC1PEhpqTYgO2qdy9840
Jan 05 15:14:27 compute-0 systemd-logind[795]: New session 32 of user zuul.
Jan 05 15:14:27 compute-0 systemd[1]: Started Session 32 of User zuul.
Jan 05 15:14:27 compute-0 sshd-session[252932]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 15:14:28 compute-0 sudo[252936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 05 15:14:28 compute-0 sudo[252936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 15:14:29 compute-0 nova_compute[185474]: 2026-01-05 15:14:29.427 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:29 compute-0 nova_compute[185474]: 2026-01-05 15:14:29.533 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:29 compute-0 podman[201880]: time="2026-01-05T15:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 15:14:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29741 "" "Go-http-client/1.1"
Jan 05 15:14:29 compute-0 podman[201880]: @ - - [05/Jan/2026:15:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4856 "" "Go-http-client/1.1"
Jan 05 15:14:30 compute-0 podman[253052]: 2026-01-05 15:14:30.605898873 +0000 UTC m=+0.087912472 container health_status 97f8675d4676fe829b68c5987a4bbb5327cd07e82f61357e9dd19e01acc629ec (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 15:14:30 compute-0 podman[253056]: 2026-01-05 15:14:30.621527591 +0000 UTC m=+0.103270323 container health_status fe8e826a5d81aa190b9a60dc6fe7d79847dc43c0843ab1c24417433207f8cad5 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 15:14:31 compute-0 openstack_network_exporter[205179]: ERROR   15:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 15:14:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:14:31 compute-0 openstack_network_exporter[205179]: ERROR   15:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 15:14:31 compute-0 openstack_network_exporter[205179]: 
Jan 05 15:14:33 compute-0 podman[253147]: 2026-01-05 15:14:33.61794955 +0000 UTC m=+0.099276529 container health_status 8266a3d40ced874717e6f333e676101715ff3ff5d5fc6a9cc55f6ca5dc2b1510 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.4, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 05 15:14:33 compute-0 ovs-vsctl[253178]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
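Note: the ovs-vsctl error above means other_config:dpdk-init is simply not set in the Open_vSwitch table, i.e. the userspace (DPDK) datapath is not enabled on this node; that is also consistent with the openstack_network_exporter errors earlier, since the dpif-netdev/pmd-* appctl commands require a userspace datapath. A minimal sketch, assuming ovs-vsctl is on PATH and the default database socket, for checking the key without triggering the "no key" error:

    # Sketch: query other_config:dpdk-init; --if-exists makes ovs-vsctl print
    # nothing instead of failing when the key is absent.
    import subprocess

    def dpdk_init_enabled() -> bool:
        out = subprocess.run(
            ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
             "other_config:dpdk-init"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip().strip('"') == "true"

    print("dpdk-init enabled:", dpdk_init_enabled())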
Jan 05 15:14:34 compute-0 nova_compute[185474]: 2026-01-05 15:14:34.430 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:34 compute-0 nova_compute[185474]: 2026-01-05 15:14:34.535 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:34 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 252960 (sos)
Jan 05 15:14:34 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 05 15:14:34 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 05 15:14:35 compute-0 virtqemud[185095]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 05 15:14:35 compute-0 virtqemud[185095]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 05 15:14:35 compute-0 virtqemud[185095]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
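Note: the three virtqemud messages above indicate that the read-only sockets of the modular libvirt daemons (virtnetworkd, virtnwfilterd, virtstoraged) are absent, which typically just means those daemons are not deployed on this compute node. A quick sketch, using the socket paths taken verbatim from the log, to see which of them exist:

    # Sketch: check which modular libvirt daemon sockets are present on the host.
    import os

    SOCKETS = [
        "/var/run/libvirt/virtnetworkd-sock-ro",
        "/var/run/libvirt/virtnwfilterd-sock-ro",
        "/var/run/libvirt/virtstoraged-sock-ro",
    ]

    for path in SOCKETS:
        print(path, "present" if os.path.exists(path) else "missing")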
Jan 05 15:14:36 compute-0 crontab[253606]: (root) LIST (root)
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.759 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.760 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
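Note: the two messages above mean the polling task has more pollsters than worker threads ([1] thread here), so pollsters queue up on the executor and the cycle runs longer. An illustrative sketch of that effect (generic ThreadPoolExecutor usage, not Ceilometer's actual code):

    # Sketch: with fewer workers than tasks, the extra tasks queue and the
    # whole cycle takes roughly n_tasks / n_workers times longer.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    pollsters = [f"pollster-{i}" for i in range(8)]

    for workers in (1, 8):
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=workers) as executor:
            list(executor.map(poll, pollsters))
        print(f"{workers} worker(s): {time.monotonic() - start:.2f}s")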
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.760 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7faeb6710200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.762 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67103e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.763 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.764 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7faeb6822cc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.768 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8', 'name': 'tempest-TestNetworkBasicOps-server-141186871', 'flavor': {'id': '3a2fb381-0342-40f9-8eb5-089f8c9475fd', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '134d57b916be4f4ca80b3a59630701e5', 'user_id': '8d883f36e32b4c71b56683d7117547d8', 'hostId': 'dd91e800a8ccaf570defe3489ea6eac358fb3fd9b78a6f5299436f84', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.771 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '00943943-b19d-4862-8829-45a5cc14e988', 'name': 'tempest-AttachInterfacesUnderV243Test-server-2119923937', 'flavor': {'id': '3a2fb381-0342-40f9-8eb5-089f8c9475fd', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e22fea2c-125b-4347-8d96-267cb6a6831b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '47a5a3a457584254b36f5f2118cf6568', 'user_id': 'f2d114b57ba04fe69b1c1c673fb3da52', 'hostId': 'e1b5aea2779c08b8229a0ef33c93fbf2dcc56b160d07dca2bcd12122', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
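Note: the two "instance data" entries come from Ceilometer's local libvirt discovery, which pairs the domains running on this hypervisor with their Nova metadata (flavor, image, tenant, and so on). Purely as an illustration of where those domains come from (using the libvirt Python bindings directly, not Ceilometer's discovery code, and assuming python3-libvirt is installed and virtqemud is reachable):

    # Sketch: list the active libvirt domains on this host, i.e. the same
    # instances the discovery above reports (instance-00000009, instance-0000000a).
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
            print(dom.UUIDString(), dom.name())
    finally:
        conn.close()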
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.772 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.772 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.772 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.772 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T15:14:37.772501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.819 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.latency volume: 2126627005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.820 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.860 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.latency volume: 4134292620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.861 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.861 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7faeb6711b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.862 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.862 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67100b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.862 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.862 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.latency volume: 647796318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.862 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.latency volume: 52531640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.863 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.latency volume: 548886735 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.863 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.latency volume: 56692568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.863 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7faeb67100e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.864 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.864 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.864 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.864 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.requests volume: 1114 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.864 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.865 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.requests volume: 1104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.865 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7faeb6710140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.866 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.866 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T15:14:37.862496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.868 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T15:14:37.864524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T15:14:37.866384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.901 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.901 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.917 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.918 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7faeb67104d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.919 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb84d5970>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.919 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T15:14:37.919707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.924 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.927 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.928 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.928 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7faeb67101a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.928 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.928 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.928 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67101d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.928 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.929 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.bytes volume: 73093120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.929 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.929 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.bytes volume: 73117696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T15:14:37.928891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.930 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7faeb6711940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.930 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.931 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.931 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.931 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.931 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7faeb6711850>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.932 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T15:14:37.931239) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.932 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711af0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.932 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.932 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.932 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.932 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.933 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.933 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7faeb6711a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.933 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.933 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.934 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711b20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T15:14:37.932458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.934 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.934 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.bytes volume: 30820864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.934 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T15:14:37.934141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.934 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.bytes volume: 30521856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.934 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.935 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7faeb6710e00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.935 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb8d50b30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.935 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T15:14:37.935716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.958 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/cpu volume: 35850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.978 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/cpu volume: 37340000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
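Note: the "cpu" volumes above are cumulative guest CPU time in nanoseconds (35850000000 ns for instance e8b580f0..., an m1.nano flavor with 1 vCPU per the discovery data earlier). Utilization is derived downstream from two successive samples; a minimal sketch of that arithmetic, where the previous sample and the 300 s polling interval are assumptions made only for illustration:

    # Sketch: derive a utilization percentage from two cumulative cpu samples.
    def cpu_util_percent(prev_cpu_ns: int, curr_cpu_ns: int,
                         interval_s: float, vcpus: int) -> float:
        busy_s = (curr_cpu_ns - prev_cpu_ns) / 1e9
        return 100.0 * busy_s / (interval_s * vcpus)

    prev = 35_760_000_000  # assumed value from the previous polling cycle
    curr = 35_850_000_000  # cumulative cpu time logged above for e8b580f0...
    print(f"cpu_util = {cpu_util_percent(prev, curr, 300.0, 1):.2f}%")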
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7faeb6710710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.979 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711340>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.979 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.979 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.979 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7faeb6712150>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.980 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.980 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.980 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T15:14:37.979438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.980 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712360>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.980 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T15:14:37.980845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.981 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.981 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.981 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7faeb6710650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7faeb6711880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.982 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.982 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711bb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.982 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7faeb6710770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.983 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.983 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7faeb6710440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T15:14:37.982854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.984 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.984 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.984 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.packets volume: 130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.984 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T15:14:37.984434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.985 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.985 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7faeb67106b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.985 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.985 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.985 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.986 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.986 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.packets volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.986 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.986 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.986 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7faeb67106e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T15:14:37.986048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.987 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.987 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.987 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb67104a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.987 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.987 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.987 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T15:14:37.987477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.987 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7faeb6711eb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.988 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.988 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.988 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710530>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.988 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.988 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.988 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7faeb6710560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.989 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T15:14:37.988632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.990 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710590>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.990 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.990 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.bytes volume: 18782 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.990 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7faeb67105f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.991 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T15:14:37.990121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb89cd5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.991 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.991 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.991 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7faeb67125d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T15:14:37.991372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.992 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.992 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6712600>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.992 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.992 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.993 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7faeb6711df0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.993 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T15:14:37.992677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.993 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.993 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.994 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.994 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/memory.usage volume: 46.5625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.994 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/memory.usage volume: 42.89453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T15:14:37.993968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.994 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7faeb6822330>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.994 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6710e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.995 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.995 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T15:14:37.995173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.995 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.995 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.996 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7faeb6711e50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.996 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6711e80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.997 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.997 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/network.incoming.bytes volume: 23129 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.997 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/network.incoming.bytes volume: 4475 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7faeb6710320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7faeb687be30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.998 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.998 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7faeb6995700>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.998 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.998 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.requests volume: 300 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T15:14:37.997032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.998 14 DEBUG ceilometer.compute.pollsters [-] e8b580f0-e687-4a7f-8bbf-6a63f53cf4b8/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T15:14:37.998416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.999 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.requests volume: 330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.999 14 DEBUG ceilometer.compute.pollsters [-] 00943943-b19d-4862-8829-45a5cc14e988/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 15:14:37 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:37.999 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.000 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.001 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 15:14:38 compute-0 ceilometer_agent_compute[195337]: 2026-01-05 15:14:38.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
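[editor's note] The run of "Finished processing pollster [...]" messages above marks the end of this polling task. To gauge how long each pollster ran inside the cycle, the "Polling pollster" / "Finished polling pollster" INFO pairs can be matched up by meter name. The sketch below is an assumed helper (not a ceilometer tool) that does that from journal text on stdin, parsing the oslo log timestamp prefix.

#!/usr/bin/env python3
# Illustrative sketch: pair "Polling pollster <meter>" with the matching
# "Finished polling pollster <meter>" line and report the elapsed time.
import re
import sys
from datetime import datetime

LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) \d+ INFO ceilometer\.polling\.manager \[-\] "
    r"(?P<event>Polling|Finished polling) pollster (?P<meter>[\w.]+)"
)

def main():
    started = {}
    for line in sys.stdin:
        m = LINE_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S.%f")
        if m.group("event") == "Polling":
            started[m.group("meter")] = ts
        elif m.group("meter") in started:
            delta = ts - started.pop(m.group("meter"))
            print(f"{m.group('meter')}: {delta.total_seconds() * 1000:.1f} ms")

if __name__ == "__main__":
    main()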
Jan 05 15:14:39 compute-0 nova_compute[185474]: 2026-01-05 15:14:39.435 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:39 compute-0 systemd[1]: Starting Hostname Service...
Jan 05 15:14:39 compute-0 nova_compute[185474]: 2026-01-05 15:14:39.538 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:39 compute-0 systemd[1]: Started Hostname Service.
Jan 05 15:14:42 compute-0 podman[254031]: 2026-01-05 15:14:42.799859738 +0000 UTC m=+0.097762430 container health_status 7f778f856fb1ab7eca39f5283472dee2b9e929775698a8f4406aabcc7d43bff1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '97ba70b331a91f88d5e5407234a97956a0e8a476d6a64852c20923add94f5c10-51782aec79924fec8a0e88fe627fc05aaf6bb307859368b48328ffaae28f3aa7-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
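[editor's note] The podman event above records the periodic container healthcheck as a comma-separated key=value list inside the parentheses (health_status=healthy, health_failing_streak=0, ...). A small assumed helper for pulling those fields out of such journal lines:

#!/usr/bin/env python3
# Illustrative sketch (assumption, not a podman tool): extract the health
# fields from "container health_status" journal lines read on stdin.
import re
import sys

HEALTH_RE = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) \((?P<attrs>.*)\)\s*$"
)

def main():
    for line in sys.stdin:
        m = HEALTH_RE.search(line)
        if not m:
            continue
        # Only flat key=value pairs are picked up; nested values such as
        # config_data are not parsed meaningfully here.
        attrs = dict(re.findall(r"(\w+)=([^,()]+)", m.group("attrs")))
        print(attrs.get("name"), attrs.get("health_status"),
              "failing_streak:", attrs.get("health_failing_streak"))

if __name__ == "__main__":
    main()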
Jan 05 15:14:44 compute-0 nova_compute[185474]: 2026-01-05 15:14:44.438 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:44 compute-0 nova_compute[185474]: 2026-01-05 15:14:44.539 185478 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 15:14:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:14:44.834 107222 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 15:14:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:14:44.836 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 15:14:44 compute-0 ovn_metadata_agent[107217]: 2026-01-05 15:14:44.837 107222 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
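[editor's note] The three oslo_concurrency.lockutils messages above show the usual acquire / run / release sequence that neutron's ProcessMonitor takes around _check_child_processes. A minimal sketch of the same locking pattern with oslo.concurrency follows; the function body and lock name are illustrative, not neutron's actual code, and it assumes the oslo.concurrency package is installed.

#!/usr/bin/env python3
# Minimal sketch of the oslo.concurrency pattern behind the
# "Acquiring lock / acquired / released" DEBUG triplet in the journal.
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named in-process lock held; lockutils emits the
    # acquire/release DEBUG messages when debug logging is enabled.
    pass

if __name__ == "__main__":
    check_child_processes()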
